00:00:00.001 Started by upstream project "autotest-per-patch" build number 126222 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.109 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.110 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.203 Using shallow fetch with depth 1 00:00:00.203 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.203 > git --version # timeout=10 00:00:00.244 > git --version # 'git version 2.39.2' 00:00:00.244 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.276 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.276 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.907 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.919 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.931 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.931 > git config core.sparsecheckout # timeout=10 00:00:04.941 > git read-tree -mu HEAD # timeout=10 00:00:04.959 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:04.981 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:04.981 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.101 [Pipeline] Start of Pipeline 00:00:05.115 [Pipeline] library 00:00:05.116 Loading library shm_lib@master 00:00:05.116 Library shm_lib@master is cached. Copying from home. 00:00:05.133 [Pipeline] node 00:00:05.142 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.143 [Pipeline] { 00:00:05.151 [Pipeline] catchError 00:00:05.153 [Pipeline] { 00:00:05.165 [Pipeline] wrap 00:00:05.172 [Pipeline] { 00:00:05.178 [Pipeline] stage 00:00:05.179 [Pipeline] { (Prologue) 00:00:05.385 [Pipeline] sh 00:00:05.668 + logger -p user.info -t JENKINS-CI 00:00:05.689 [Pipeline] echo 00:00:05.691 Node: GP11 00:00:05.697 [Pipeline] sh 00:00:05.994 [Pipeline] setCustomBuildProperty 00:00:06.006 [Pipeline] echo 00:00:06.008 Cleanup processes 00:00:06.014 [Pipeline] sh 00:00:06.297 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.297 3093339 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.310 [Pipeline] sh 00:00:06.594 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.594 ++ grep -v 'sudo pgrep' 00:00:06.594 ++ awk '{print $1}' 00:00:06.594 + sudo kill -9 00:00:06.594 + true 00:00:06.607 [Pipeline] cleanWs 00:00:06.614 [WS-CLEANUP] Deleting project workspace... 00:00:06.614 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.621 [WS-CLEANUP] done 00:00:06.624 [Pipeline] setCustomBuildProperty 00:00:06.635 [Pipeline] sh 00:00:06.915 + sudo git config --global --replace-all safe.directory '*' 00:00:06.979 [Pipeline] httpRequest 00:00:07.008 [Pipeline] echo 00:00:07.010 Sorcerer 10.211.164.101 is alive 00:00:07.018 [Pipeline] httpRequest 00:00:07.023 HttpMethod: GET 00:00:07.023 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.024 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.044 Response Code: HTTP/1.1 200 OK 00:00:07.044 Success: Status code 200 is in the accepted range: 200,404 00:00:07.045 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:24.109 [Pipeline] sh 00:00:24.392 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:24.413 [Pipeline] httpRequest 00:00:24.433 [Pipeline] echo 00:00:24.435 Sorcerer 10.211.164.101 is alive 00:00:24.444 [Pipeline] httpRequest 00:00:24.448 HttpMethod: GET 00:00:24.449 URL: http://10.211.164.101/packages/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:00:24.449 Sending request to url: http://10.211.164.101/packages/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:00:24.452 Response Code: HTTP/1.1 200 OK 00:00:24.452 Success: Status code 200 is in the accepted range: 200,404 00:00:24.452 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:00:40.676 [Pipeline] sh 00:00:40.955 + tar --no-same-owner -xf spdk_a22f117fe5f0b0fdd392a07d6811ed9bd7a0a55f.tar.gz 00:00:44.254 [Pipeline] sh 00:00:44.553 + git -C spdk log --oneline -n5 00:00:44.553 a22f117fe nvme/perf: Use sqthread_poll_cpu for io_uring workloads 00:00:44.553 719d03c6a sock/uring: only register net impl if supported 00:00:44.553 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:44.553 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:44.553 6c7c1f57e accel: add sequence outstanding stat 00:00:44.566 [Pipeline] } 00:00:44.584 [Pipeline] // stage 00:00:44.594 [Pipeline] stage 00:00:44.598 [Pipeline] { (Prepare) 00:00:44.619 [Pipeline] writeFile 00:00:44.638 [Pipeline] sh 00:00:44.920 + logger -p user.info -t JENKINS-CI 00:00:44.933 [Pipeline] sh 00:00:45.215 + logger -p user.info -t JENKINS-CI 00:00:45.229 [Pipeline] sh 00:00:45.511 + cat autorun-spdk.conf 00:00:45.511 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.512 SPDK_TEST_NVMF=1 00:00:45.512 SPDK_TEST_NVME_CLI=1 00:00:45.512 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.512 SPDK_TEST_NVMF_NICS=e810 00:00:45.512 SPDK_TEST_VFIOUSER=1 00:00:45.512 SPDK_RUN_UBSAN=1 00:00:45.512 NET_TYPE=phy 00:00:45.519 RUN_NIGHTLY=0 00:00:45.523 [Pipeline] readFile 00:00:45.552 [Pipeline] withEnv 00:00:45.554 [Pipeline] { 00:00:45.569 [Pipeline] sh 00:00:45.856 + set -ex 00:00:45.856 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:45.856 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:45.856 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.856 ++ SPDK_TEST_NVMF=1 00:00:45.856 ++ SPDK_TEST_NVME_CLI=1 00:00:45.856 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.856 ++ SPDK_TEST_NVMF_NICS=e810 00:00:45.856 ++ SPDK_TEST_VFIOUSER=1 00:00:45.856 ++ SPDK_RUN_UBSAN=1 00:00:45.856 ++ NET_TYPE=phy 00:00:45.856 ++ RUN_NIGHTLY=0 00:00:45.856 + case $SPDK_TEST_NVMF_NICS in 00:00:45.856 + DRIVERS=ice 00:00:45.856 + [[ 
tcp == \r\d\m\a ]] 00:00:45.856 + [[ -n ice ]] 00:00:45.856 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:45.856 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:45.856 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:45.856 rmmod: ERROR: Module irdma is not currently loaded 00:00:45.856 rmmod: ERROR: Module i40iw is not currently loaded 00:00:45.856 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:45.856 + true 00:00:45.856 + for D in $DRIVERS 00:00:45.856 + sudo modprobe ice 00:00:45.856 + exit 0 00:00:45.866 [Pipeline] } 00:00:45.885 [Pipeline] // withEnv 00:00:45.891 [Pipeline] } 00:00:45.909 [Pipeline] // stage 00:00:45.921 [Pipeline] catchError 00:00:45.922 [Pipeline] { 00:00:45.940 [Pipeline] timeout 00:00:45.940 Timeout set to expire in 50 min 00:00:45.942 [Pipeline] { 00:00:45.957 [Pipeline] stage 00:00:45.960 [Pipeline] { (Tests) 00:00:45.977 [Pipeline] sh 00:00:46.262 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.262 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.262 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.262 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:46.262 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:46.262 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.262 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:46.262 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.262 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.262 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.262 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:46.262 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.262 + source /etc/os-release 00:00:46.263 ++ NAME='Fedora Linux' 00:00:46.263 ++ VERSION='38 (Cloud Edition)' 00:00:46.263 ++ ID=fedora 00:00:46.263 ++ VERSION_ID=38 00:00:46.263 ++ VERSION_CODENAME= 00:00:46.263 ++ PLATFORM_ID=platform:f38 00:00:46.263 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:46.263 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:46.263 ++ LOGO=fedora-logo-icon 00:00:46.263 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:46.263 ++ HOME_URL=https://fedoraproject.org/ 00:00:46.263 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:46.263 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:46.263 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:46.263 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:46.263 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:46.263 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:46.263 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:46.263 ++ SUPPORT_END=2024-05-14 00:00:46.263 ++ VARIANT='Cloud Edition' 00:00:46.263 ++ VARIANT_ID=cloud 00:00:46.263 + uname -a 00:00:46.263 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:46.263 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:47.197 Hugepages 00:00:47.197 node hugesize free / total 00:00:47.197 node0 1048576kB 0 / 0 00:00:47.197 node0 2048kB 0 / 0 00:00:47.197 node1 1048576kB 0 / 0 00:00:47.197 node1 2048kB 0 / 0 00:00:47.197 00:00:47.197 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:47.197 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:47.197 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:47.197 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:47.197 
I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:47.197 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:47.197 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:47.197 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:47.197 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:47.197 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:47.197 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:47.197 + rm -f /tmp/spdk-ld-path 00:00:47.197 + source autorun-spdk.conf 00:00:47.197 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.197 ++ SPDK_TEST_NVMF=1 00:00:47.197 ++ SPDK_TEST_NVME_CLI=1 00:00:47.197 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.197 ++ SPDK_TEST_NVMF_NICS=e810 00:00:47.197 ++ SPDK_TEST_VFIOUSER=1 00:00:47.197 ++ SPDK_RUN_UBSAN=1 00:00:47.197 ++ NET_TYPE=phy 00:00:47.197 ++ RUN_NIGHTLY=0 00:00:47.197 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:47.197 + [[ -n '' ]] 00:00:47.197 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.197 + for M in /var/spdk/build-*-manifest.txt 00:00:47.197 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:47.197 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:47.197 + for M in /var/spdk/build-*-manifest.txt 00:00:47.197 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:47.197 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:47.456 ++ uname 00:00:47.456 + [[ Linux == \L\i\n\u\x ]] 00:00:47.456 + sudo dmesg -T 00:00:47.456 + sudo dmesg --clear 00:00:47.456 + dmesg_pid=3094024 00:00:47.456 + [[ Fedora Linux == FreeBSD ]] 00:00:47.456 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:47.456 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:47.456 + sudo dmesg -Tw 00:00:47.456 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:47.456 + [[ -x /usr/src/fio-static/fio ]] 00:00:47.456 + export FIO_BIN=/usr/src/fio-static/fio 00:00:47.456 + FIO_BIN=/usr/src/fio-static/fio 00:00:47.456 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:47.456 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:47.456 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:47.456 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:47.456 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:47.456 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:47.456 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:47.456 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:47.456 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:47.456 Test configuration: 00:00:47.456 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.456 SPDK_TEST_NVMF=1 00:00:47.456 SPDK_TEST_NVME_CLI=1 00:00:47.456 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.456 SPDK_TEST_NVMF_NICS=e810 00:00:47.456 SPDK_TEST_VFIOUSER=1 00:00:47.456 SPDK_RUN_UBSAN=1 00:00:47.456 NET_TYPE=phy 00:00:47.456 RUN_NIGHTLY=0 18:55:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:47.456 18:55:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:47.456 18:55:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:47.456 18:55:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:47.456 18:55:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:47.456 18:55:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:47.456 18:55:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:47.456 18:55:27 -- paths/export.sh@5 -- $ export PATH 00:00:47.456 18:55:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:47.456 18:55:27 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:47.456 18:55:27 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:47.456 18:55:27 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721062527.XXXXXX 00:00:47.456 18:55:27 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721062527.1mzncp 00:00:47.456 18:55:27 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:47.456 18:55:27 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:47.456 18:55:27 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:47.456 18:55:27 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:47.456 18:55:27 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:47.456 18:55:27 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:47.456 18:55:27 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:47.456 18:55:27 -- common/autotest_common.sh@10 -- $ set +x 00:00:47.456 18:55:27 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:47.456 18:55:27 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:47.456 18:55:27 -- pm/common@17 -- $ local monitor 00:00:47.456 18:55:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:47.456 18:55:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:47.456 18:55:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:47.456 18:55:27 -- pm/common@21 -- $ date +%s 00:00:47.456 18:55:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:47.456 18:55:27 -- pm/common@21 -- $ date +%s 00:00:47.456 18:55:27 -- pm/common@25 -- $ sleep 1 00:00:47.456 18:55:27 -- pm/common@21 -- $ date +%s 00:00:47.456 18:55:27 -- pm/common@21 -- $ date +%s 00:00:47.456 18:55:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062527 00:00:47.456 18:55:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062527 00:00:47.456 18:55:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062527 00:00:47.456 18:55:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721062527 00:00:47.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062527_collect-vmstat.pm.log 00:00:47.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062527_collect-cpu-load.pm.log 00:00:47.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062527_collect-cpu-temp.pm.log 00:00:47.456 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721062527_collect-bmc-pm.bmc.pm.log 00:00:48.390 18:55:28 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:48.390 18:55:28 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:48.390 18:55:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:48.390 18:55:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.390 18:55:28 -- spdk/autobuild.sh@16 -- $ date -u 00:00:48.390 Mon Jul 15 04:55:28 PM UTC 2024 00:00:48.390 18:55:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:48.390 v24.09-pre-203-ga22f117fe 00:00:48.390 18:55:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:48.391 18:55:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:48.391 18:55:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:48.391 18:55:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:48.391 18:55:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:48.391 18:55:28 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.391 ************************************ 00:00:48.391 START TEST ubsan 00:00:48.391 ************************************ 00:00:48.391 18:55:28 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:48.391 using ubsan 00:00:48.391 00:00:48.391 real 0m0.000s 00:00:48.391 user 0m0.000s 00:00:48.391 sys 0m0.000s 00:00:48.391 18:55:28 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:48.391 18:55:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:48.391 ************************************ 00:00:48.391 END TEST ubsan 00:00:48.391 ************************************ 00:00:48.391 18:55:28 -- common/autotest_common.sh@1142 -- $ return 0 00:00:48.391 18:55:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:48.391 18:55:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:48.391 18:55:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:48.391 18:55:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:48.391 18:55:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:48.391 18:55:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:48.391 18:55:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:48.391 18:55:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:48.391 18:55:28 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:48.648 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:48.648 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:48.907 Using 'verbs' RDMA provider 00:00:59.466 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:09.446 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:09.447 Creating mk/config.mk...done. 00:01:09.447 Creating mk/cc.flags.mk...done. 00:01:09.447 Type 'make' to build. 00:01:09.447 18:55:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:09.447 18:55:49 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:09.447 18:55:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:09.447 18:55:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.447 ************************************ 00:01:09.447 START TEST make 00:01:09.447 ************************************ 00:01:09.447 18:55:49 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:09.447 make[1]: Nothing to be done for 'all'. 
00:01:10.834 The Meson build system 00:01:10.834 Version: 1.3.1 00:01:10.834 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:10.834 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:10.834 Build type: native build 00:01:10.834 Project name: libvfio-user 00:01:10.834 Project version: 0.0.1 00:01:10.834 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:10.834 C linker for the host machine: cc ld.bfd 2.39-16 00:01:10.834 Host machine cpu family: x86_64 00:01:10.834 Host machine cpu: x86_64 00:01:10.834 Run-time dependency threads found: YES 00:01:10.834 Library dl found: YES 00:01:10.834 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:10.834 Run-time dependency json-c found: YES 0.17 00:01:10.834 Run-time dependency cmocka found: YES 1.1.7 00:01:10.834 Program pytest-3 found: NO 00:01:10.834 Program flake8 found: NO 00:01:10.834 Program misspell-fixer found: NO 00:01:10.834 Program restructuredtext-lint found: NO 00:01:10.834 Program valgrind found: YES (/usr/bin/valgrind) 00:01:10.834 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:10.834 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:10.834 Compiler for C supports arguments -Wwrite-strings: YES 00:01:10.834 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:10.834 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:10.834 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:10.834 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:10.834 Build targets in project: 8 00:01:10.834 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:10.834 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:10.834 00:01:10.834 libvfio-user 0.0.1 00:01:10.834 00:01:10.834 User defined options 00:01:10.834 buildtype : debug 00:01:10.834 default_library: shared 00:01:10.834 libdir : /usr/local/lib 00:01:10.834 00:01:10.834 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:11.799 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:11.799 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:11.799 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:12.059 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:12.059 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:12.059 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:12.059 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:12.059 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:12.059 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:12.059 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:12.059 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:12.059 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:12.059 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:12.059 [13/37] Compiling C object samples/null.p/null.c.o 00:01:12.059 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:12.059 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:12.059 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:12.059 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:12.059 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:12.059 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:12.059 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:12.059 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:12.059 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:12.059 [23/37] Compiling C object samples/server.p/server.c.o 00:01:12.059 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:12.059 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:12.059 [26/37] Compiling C object samples/client.p/client.c.o 00:01:12.318 [27/37] Linking target samples/client 00:01:12.318 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:12.318 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:12.318 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:12.318 [31/37] Linking target test/unit_tests 00:01:12.582 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:12.582 [33/37] Linking target samples/null 00:01:12.582 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:12.582 [35/37] Linking target samples/gpio-pci-idio-16 00:01:12.582 [36/37] Linking target samples/server 00:01:12.582 [37/37] Linking target samples/lspci 00:01:12.582 INFO: autodetecting backend as ninja 00:01:12.582 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:12.848 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:13.417 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:13.417 ninja: no work to do. 00:01:18.691 The Meson build system 00:01:18.691 Version: 1.3.1 00:01:18.691 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:18.691 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:18.691 Build type: native build 00:01:18.691 Program cat found: YES (/usr/bin/cat) 00:01:18.691 Project name: DPDK 00:01:18.691 Project version: 24.03.0 00:01:18.691 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:18.691 C linker for the host machine: cc ld.bfd 2.39-16 00:01:18.691 Host machine cpu family: x86_64 00:01:18.691 Host machine cpu: x86_64 00:01:18.691 Message: ## Building in Developer Mode ## 00:01:18.691 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:18.691 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:18.691 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:18.691 Program python3 found: YES (/usr/bin/python3) 00:01:18.691 Program cat found: YES (/usr/bin/cat) 00:01:18.691 Compiler for C supports arguments -march=native: YES 00:01:18.691 Checking for size of "void *" : 8 00:01:18.691 Checking for size of "void *" : 8 (cached) 00:01:18.691 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:18.691 Library m found: YES 00:01:18.691 Library numa found: YES 00:01:18.691 Has header "numaif.h" : YES 00:01:18.691 Library fdt found: NO 00:01:18.691 Library execinfo found: NO 00:01:18.691 Has header "execinfo.h" : YES 00:01:18.691 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:18.691 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:18.691 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:18.691 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:18.691 Run-time dependency openssl found: YES 3.0.9 00:01:18.691 Run-time dependency libpcap found: YES 1.10.4 00:01:18.691 Has header "pcap.h" with dependency libpcap: YES 00:01:18.691 Compiler for C supports arguments -Wcast-qual: YES 00:01:18.691 Compiler for C supports arguments -Wdeprecated: YES 00:01:18.691 Compiler for C supports arguments -Wformat: YES 00:01:18.691 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:18.691 Compiler for C supports arguments -Wformat-security: NO 00:01:18.691 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:18.691 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:18.691 Compiler for C supports arguments -Wnested-externs: YES 00:01:18.691 Compiler for C supports arguments -Wold-style-definition: YES 00:01:18.691 Compiler for C supports arguments -Wpointer-arith: YES 00:01:18.691 Compiler for C supports arguments -Wsign-compare: YES 00:01:18.691 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:18.691 Compiler for C supports arguments -Wundef: YES 00:01:18.692 Compiler for C supports arguments -Wwrite-strings: YES 00:01:18.692 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:18.692 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:18.692 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:18.692 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:18.692 Program objdump found: YES (/usr/bin/objdump) 00:01:18.692 Compiler for C supports arguments -mavx512f: YES 00:01:18.692 Checking if "AVX512 checking" compiles: YES 00:01:18.692 Fetching value of define "__SSE4_2__" : 1 00:01:18.692 Fetching value of define "__AES__" : 1 00:01:18.692 Fetching value of define "__AVX__" : 1 00:01:18.692 Fetching value of define "__AVX2__" : (undefined) 00:01:18.692 Fetching value of define "__AVX512BW__" : (undefined) 00:01:18.692 Fetching value of define "__AVX512CD__" : (undefined) 00:01:18.692 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:18.692 Fetching value of define "__AVX512F__" : (undefined) 00:01:18.692 Fetching value of define "__AVX512VL__" : (undefined) 00:01:18.692 Fetching value of define "__PCLMUL__" : 1 00:01:18.692 Fetching value of define "__RDRND__" : 1 00:01:18.692 Fetching value of define "__RDSEED__" : (undefined) 00:01:18.692 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:18.692 Fetching value of define "__znver1__" : (undefined) 00:01:18.692 Fetching value of define "__znver2__" : (undefined) 00:01:18.692 Fetching value of define "__znver3__" : (undefined) 00:01:18.692 Fetching value of define "__znver4__" : (undefined) 00:01:18.692 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:18.692 Message: lib/log: Defining dependency "log" 00:01:18.692 Message: lib/kvargs: Defining dependency "kvargs" 00:01:18.692 Message: lib/telemetry: Defining dependency "telemetry" 00:01:18.692 Checking for function "getentropy" : NO 00:01:18.692 Message: lib/eal: Defining dependency "eal" 00:01:18.692 Message: lib/ring: Defining dependency "ring" 00:01:18.692 Message: lib/rcu: Defining dependency "rcu" 00:01:18.692 Message: lib/mempool: Defining dependency "mempool" 00:01:18.692 Message: lib/mbuf: Defining dependency "mbuf" 00:01:18.692 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:18.692 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:18.692 Compiler for C supports arguments -mpclmul: YES 00:01:18.692 Compiler for C supports arguments -maes: YES 00:01:18.692 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:18.692 Compiler for C supports arguments -mavx512bw: YES 00:01:18.692 Compiler for C supports arguments -mavx512dq: YES 00:01:18.692 Compiler for C supports arguments -mavx512vl: YES 00:01:18.692 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:18.692 Compiler for C supports arguments -mavx2: YES 00:01:18.692 Compiler for C supports arguments -mavx: YES 00:01:18.692 Message: lib/net: Defining dependency "net" 00:01:18.692 Message: lib/meter: Defining dependency "meter" 00:01:18.692 Message: lib/ethdev: Defining dependency "ethdev" 00:01:18.692 Message: lib/pci: Defining dependency "pci" 00:01:18.692 Message: lib/cmdline: Defining dependency "cmdline" 00:01:18.692 Message: lib/hash: Defining dependency "hash" 00:01:18.692 Message: lib/timer: Defining dependency "timer" 00:01:18.692 Message: lib/compressdev: Defining dependency "compressdev" 00:01:18.692 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:18.692 Message: lib/dmadev: Defining dependency "dmadev" 00:01:18.692 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:18.692 Message: lib/power: Defining dependency "power" 00:01:18.692 Message: lib/reorder: Defining dependency "reorder" 00:01:18.692 
Message: lib/security: Defining dependency "security" 00:01:18.692 Has header "linux/userfaultfd.h" : YES 00:01:18.692 Has header "linux/vduse.h" : YES 00:01:18.692 Message: lib/vhost: Defining dependency "vhost" 00:01:18.692 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:18.692 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:18.692 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:18.692 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:18.692 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:18.692 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:18.692 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:18.692 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:18.692 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:18.692 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:18.692 Program doxygen found: YES (/usr/bin/doxygen) 00:01:18.692 Configuring doxy-api-html.conf using configuration 00:01:18.692 Configuring doxy-api-man.conf using configuration 00:01:18.692 Program mandb found: YES (/usr/bin/mandb) 00:01:18.692 Program sphinx-build found: NO 00:01:18.692 Configuring rte_build_config.h using configuration 00:01:18.692 Message: 00:01:18.692 ================= 00:01:18.692 Applications Enabled 00:01:18.692 ================= 00:01:18.692 00:01:18.692 apps: 00:01:18.692 00:01:18.692 00:01:18.692 Message: 00:01:18.692 ================= 00:01:18.692 Libraries Enabled 00:01:18.692 ================= 00:01:18.692 00:01:18.692 libs: 00:01:18.692 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:18.692 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:18.692 cryptodev, dmadev, power, reorder, security, vhost, 00:01:18.692 00:01:18.692 Message: 00:01:18.692 =============== 00:01:18.692 Drivers Enabled 00:01:18.692 =============== 00:01:18.692 00:01:18.692 common: 00:01:18.692 00:01:18.692 bus: 00:01:18.692 pci, vdev, 00:01:18.692 mempool: 00:01:18.692 ring, 00:01:18.692 dma: 00:01:18.692 00:01:18.692 net: 00:01:18.692 00:01:18.692 crypto: 00:01:18.692 00:01:18.692 compress: 00:01:18.692 00:01:18.692 vdpa: 00:01:18.692 00:01:18.692 00:01:18.692 Message: 00:01:18.692 ================= 00:01:18.692 Content Skipped 00:01:18.692 ================= 00:01:18.692 00:01:18.692 apps: 00:01:18.692 dumpcap: explicitly disabled via build config 00:01:18.692 graph: explicitly disabled via build config 00:01:18.692 pdump: explicitly disabled via build config 00:01:18.692 proc-info: explicitly disabled via build config 00:01:18.692 test-acl: explicitly disabled via build config 00:01:18.692 test-bbdev: explicitly disabled via build config 00:01:18.692 test-cmdline: explicitly disabled via build config 00:01:18.692 test-compress-perf: explicitly disabled via build config 00:01:18.692 test-crypto-perf: explicitly disabled via build config 00:01:18.692 test-dma-perf: explicitly disabled via build config 00:01:18.692 test-eventdev: explicitly disabled via build config 00:01:18.692 test-fib: explicitly disabled via build config 00:01:18.692 test-flow-perf: explicitly disabled via build config 00:01:18.692 test-gpudev: explicitly disabled via build config 00:01:18.692 test-mldev: explicitly disabled via build config 00:01:18.692 test-pipeline: explicitly disabled via build config 00:01:18.692 test-pmd: explicitly disabled via build config 
00:01:18.692 test-regex: explicitly disabled via build config 00:01:18.692 test-sad: explicitly disabled via build config 00:01:18.692 test-security-perf: explicitly disabled via build config 00:01:18.692 00:01:18.692 libs: 00:01:18.692 argparse: explicitly disabled via build config 00:01:18.692 metrics: explicitly disabled via build config 00:01:18.692 acl: explicitly disabled via build config 00:01:18.692 bbdev: explicitly disabled via build config 00:01:18.692 bitratestats: explicitly disabled via build config 00:01:18.692 bpf: explicitly disabled via build config 00:01:18.692 cfgfile: explicitly disabled via build config 00:01:18.692 distributor: explicitly disabled via build config 00:01:18.692 efd: explicitly disabled via build config 00:01:18.692 eventdev: explicitly disabled via build config 00:01:18.692 dispatcher: explicitly disabled via build config 00:01:18.692 gpudev: explicitly disabled via build config 00:01:18.692 gro: explicitly disabled via build config 00:01:18.692 gso: explicitly disabled via build config 00:01:18.692 ip_frag: explicitly disabled via build config 00:01:18.692 jobstats: explicitly disabled via build config 00:01:18.692 latencystats: explicitly disabled via build config 00:01:18.692 lpm: explicitly disabled via build config 00:01:18.692 member: explicitly disabled via build config 00:01:18.692 pcapng: explicitly disabled via build config 00:01:18.692 rawdev: explicitly disabled via build config 00:01:18.692 regexdev: explicitly disabled via build config 00:01:18.692 mldev: explicitly disabled via build config 00:01:18.692 rib: explicitly disabled via build config 00:01:18.692 sched: explicitly disabled via build config 00:01:18.692 stack: explicitly disabled via build config 00:01:18.692 ipsec: explicitly disabled via build config 00:01:18.692 pdcp: explicitly disabled via build config 00:01:18.692 fib: explicitly disabled via build config 00:01:18.692 port: explicitly disabled via build config 00:01:18.692 pdump: explicitly disabled via build config 00:01:18.692 table: explicitly disabled via build config 00:01:18.692 pipeline: explicitly disabled via build config 00:01:18.692 graph: explicitly disabled via build config 00:01:18.692 node: explicitly disabled via build config 00:01:18.692 00:01:18.692 drivers: 00:01:18.692 common/cpt: not in enabled drivers build config 00:01:18.692 common/dpaax: not in enabled drivers build config 00:01:18.692 common/iavf: not in enabled drivers build config 00:01:18.692 common/idpf: not in enabled drivers build config 00:01:18.692 common/ionic: not in enabled drivers build config 00:01:18.692 common/mvep: not in enabled drivers build config 00:01:18.692 common/octeontx: not in enabled drivers build config 00:01:18.692 bus/auxiliary: not in enabled drivers build config 00:01:18.692 bus/cdx: not in enabled drivers build config 00:01:18.692 bus/dpaa: not in enabled drivers build config 00:01:18.692 bus/fslmc: not in enabled drivers build config 00:01:18.692 bus/ifpga: not in enabled drivers build config 00:01:18.692 bus/platform: not in enabled drivers build config 00:01:18.692 bus/uacce: not in enabled drivers build config 00:01:18.692 bus/vmbus: not in enabled drivers build config 00:01:18.692 common/cnxk: not in enabled drivers build config 00:01:18.692 common/mlx5: not in enabled drivers build config 00:01:18.692 common/nfp: not in enabled drivers build config 00:01:18.692 common/nitrox: not in enabled drivers build config 00:01:18.692 common/qat: not in enabled drivers build config 00:01:18.692 common/sfc_efx: not in 
enabled drivers build config 00:01:18.692 mempool/bucket: not in enabled drivers build config 00:01:18.692 mempool/cnxk: not in enabled drivers build config 00:01:18.692 mempool/dpaa: not in enabled drivers build config 00:01:18.692 mempool/dpaa2: not in enabled drivers build config 00:01:18.692 mempool/octeontx: not in enabled drivers build config 00:01:18.692 mempool/stack: not in enabled drivers build config 00:01:18.692 dma/cnxk: not in enabled drivers build config 00:01:18.692 dma/dpaa: not in enabled drivers build config 00:01:18.693 dma/dpaa2: not in enabled drivers build config 00:01:18.693 dma/hisilicon: not in enabled drivers build config 00:01:18.693 dma/idxd: not in enabled drivers build config 00:01:18.693 dma/ioat: not in enabled drivers build config 00:01:18.693 dma/skeleton: not in enabled drivers build config 00:01:18.693 net/af_packet: not in enabled drivers build config 00:01:18.693 net/af_xdp: not in enabled drivers build config 00:01:18.693 net/ark: not in enabled drivers build config 00:01:18.693 net/atlantic: not in enabled drivers build config 00:01:18.693 net/avp: not in enabled drivers build config 00:01:18.693 net/axgbe: not in enabled drivers build config 00:01:18.693 net/bnx2x: not in enabled drivers build config 00:01:18.693 net/bnxt: not in enabled drivers build config 00:01:18.693 net/bonding: not in enabled drivers build config 00:01:18.693 net/cnxk: not in enabled drivers build config 00:01:18.693 net/cpfl: not in enabled drivers build config 00:01:18.693 net/cxgbe: not in enabled drivers build config 00:01:18.693 net/dpaa: not in enabled drivers build config 00:01:18.693 net/dpaa2: not in enabled drivers build config 00:01:18.693 net/e1000: not in enabled drivers build config 00:01:18.693 net/ena: not in enabled drivers build config 00:01:18.693 net/enetc: not in enabled drivers build config 00:01:18.693 net/enetfec: not in enabled drivers build config 00:01:18.693 net/enic: not in enabled drivers build config 00:01:18.693 net/failsafe: not in enabled drivers build config 00:01:18.693 net/fm10k: not in enabled drivers build config 00:01:18.693 net/gve: not in enabled drivers build config 00:01:18.693 net/hinic: not in enabled drivers build config 00:01:18.693 net/hns3: not in enabled drivers build config 00:01:18.693 net/i40e: not in enabled drivers build config 00:01:18.693 net/iavf: not in enabled drivers build config 00:01:18.693 net/ice: not in enabled drivers build config 00:01:18.693 net/idpf: not in enabled drivers build config 00:01:18.693 net/igc: not in enabled drivers build config 00:01:18.693 net/ionic: not in enabled drivers build config 00:01:18.693 net/ipn3ke: not in enabled drivers build config 00:01:18.693 net/ixgbe: not in enabled drivers build config 00:01:18.693 net/mana: not in enabled drivers build config 00:01:18.693 net/memif: not in enabled drivers build config 00:01:18.693 net/mlx4: not in enabled drivers build config 00:01:18.693 net/mlx5: not in enabled drivers build config 00:01:18.693 net/mvneta: not in enabled drivers build config 00:01:18.693 net/mvpp2: not in enabled drivers build config 00:01:18.693 net/netvsc: not in enabled drivers build config 00:01:18.693 net/nfb: not in enabled drivers build config 00:01:18.693 net/nfp: not in enabled drivers build config 00:01:18.693 net/ngbe: not in enabled drivers build config 00:01:18.693 net/null: not in enabled drivers build config 00:01:18.693 net/octeontx: not in enabled drivers build config 00:01:18.693 net/octeon_ep: not in enabled drivers build config 00:01:18.693 
net/pcap: not in enabled drivers build config 00:01:18.693 net/pfe: not in enabled drivers build config 00:01:18.693 net/qede: not in enabled drivers build config 00:01:18.693 net/ring: not in enabled drivers build config 00:01:18.693 net/sfc: not in enabled drivers build config 00:01:18.693 net/softnic: not in enabled drivers build config 00:01:18.693 net/tap: not in enabled drivers build config 00:01:18.693 net/thunderx: not in enabled drivers build config 00:01:18.693 net/txgbe: not in enabled drivers build config 00:01:18.693 net/vdev_netvsc: not in enabled drivers build config 00:01:18.693 net/vhost: not in enabled drivers build config 00:01:18.693 net/virtio: not in enabled drivers build config 00:01:18.693 net/vmxnet3: not in enabled drivers build config 00:01:18.693 raw/*: missing internal dependency, "rawdev" 00:01:18.693 crypto/armv8: not in enabled drivers build config 00:01:18.693 crypto/bcmfs: not in enabled drivers build config 00:01:18.693 crypto/caam_jr: not in enabled drivers build config 00:01:18.693 crypto/ccp: not in enabled drivers build config 00:01:18.693 crypto/cnxk: not in enabled drivers build config 00:01:18.693 crypto/dpaa_sec: not in enabled drivers build config 00:01:18.693 crypto/dpaa2_sec: not in enabled drivers build config 00:01:18.693 crypto/ipsec_mb: not in enabled drivers build config 00:01:18.693 crypto/mlx5: not in enabled drivers build config 00:01:18.693 crypto/mvsam: not in enabled drivers build config 00:01:18.693 crypto/nitrox: not in enabled drivers build config 00:01:18.693 crypto/null: not in enabled drivers build config 00:01:18.693 crypto/octeontx: not in enabled drivers build config 00:01:18.693 crypto/openssl: not in enabled drivers build config 00:01:18.693 crypto/scheduler: not in enabled drivers build config 00:01:18.693 crypto/uadk: not in enabled drivers build config 00:01:18.693 crypto/virtio: not in enabled drivers build config 00:01:18.693 compress/isal: not in enabled drivers build config 00:01:18.693 compress/mlx5: not in enabled drivers build config 00:01:18.693 compress/nitrox: not in enabled drivers build config 00:01:18.693 compress/octeontx: not in enabled drivers build config 00:01:18.693 compress/zlib: not in enabled drivers build config 00:01:18.693 regex/*: missing internal dependency, "regexdev" 00:01:18.693 ml/*: missing internal dependency, "mldev" 00:01:18.693 vdpa/ifc: not in enabled drivers build config 00:01:18.693 vdpa/mlx5: not in enabled drivers build config 00:01:18.693 vdpa/nfp: not in enabled drivers build config 00:01:18.693 vdpa/sfc: not in enabled drivers build config 00:01:18.693 event/*: missing internal dependency, "eventdev" 00:01:18.693 baseband/*: missing internal dependency, "bbdev" 00:01:18.693 gpu/*: missing internal dependency, "gpudev" 00:01:18.693 00:01:18.693 00:01:18.693 Build targets in project: 85 00:01:18.693 00:01:18.693 DPDK 24.03.0 00:01:18.693 00:01:18.693 User defined options 00:01:18.693 buildtype : debug 00:01:18.693 default_library : shared 00:01:18.693 libdir : lib 00:01:18.693 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:18.693 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:18.693 c_link_args : 00:01:18.693 cpu_instruction_set: native 00:01:18.693 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:18.693 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:18.693 enable_docs : false 00:01:18.693 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:18.693 enable_kmods : false 00:01:18.693 max_lcores : 128 00:01:18.693 tests : false 00:01:18.693 00:01:18.693 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:18.693 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:18.693 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:18.693 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:18.693 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:18.693 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:18.693 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:18.693 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:18.693 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:18.693 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:18.693 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:18.693 [10/268] Linking static target lib/librte_kvargs.a 00:01:18.693 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:18.693 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:18.693 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:18.693 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:18.693 [15/268] Linking static target lib/librte_log.a 00:01:18.693 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:19.262 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.522 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:19.522 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:19.522 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:19.522 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:19.522 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:19.522 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:19.522 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.522 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:19.522 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:19.522 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:19.522 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:19.522 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:19.522 [30/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:19.522 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:19.522 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:19.522 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:19.522 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:19.522 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:19.522 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:19.522 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:19.522 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:19.522 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:19.522 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:19.522 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:19.522 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:19.522 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:19.522 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:19.522 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:19.522 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:19.522 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:19.522 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:19.522 [49/268] Linking static target lib/librte_telemetry.a 00:01:19.522 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:19.522 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:19.522 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:19.522 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.522 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:19.522 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:19.522 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:19.522 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:19.782 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:19.782 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:19.782 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.782 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:19.782 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:19.783 [63/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.783 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:19.783 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:20.044 [66/268] Linking target lib/librte_log.so.24.1 00:01:20.044 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:20.044 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:20.044 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:20.044 [70/268] Linking static target lib/librte_pci.a 
00:01:20.044 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:20.304 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:20.304 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:20.304 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:20.304 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:20.304 [76/268] Linking target lib/librte_kvargs.so.24.1 00:01:20.304 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:20.304 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:20.304 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:20.304 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:20.304 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:20.304 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:20.304 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:20.304 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:20.601 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:20.601 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:20.601 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:20.601 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:20.601 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:20.601 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:20.601 [91/268] Linking static target lib/librte_ring.a 00:01:20.601 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:20.601 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:20.601 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:20.601 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:20.601 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:20.601 [97/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.601 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:20.601 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:20.601 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:20.601 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:20.601 [102/268] Linking static target lib/librte_meter.a 00:01:20.601 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:20.601 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:20.601 [105/268] Linking target lib/librte_telemetry.so.24.1 00:01:20.601 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:20.601 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:20.601 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:20.601 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.601 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:20.601 [111/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:20.601 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:20.601 [113/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:20.601 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:20.601 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:20.601 [116/268] Linking static target lib/librte_mempool.a 00:01:20.888 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:20.888 [118/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:20.888 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:20.888 [120/268] Linking static target lib/librte_eal.a 00:01:20.888 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:20.888 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:20.888 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:20.888 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:20.888 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:20.888 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:20.888 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:20.888 [128/268] Linking static target lib/librte_rcu.a 00:01:20.888 [129/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:20.888 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:20.888 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:20.888 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:21.154 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:21.154 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:21.154 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.154 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.154 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:21.154 [138/268] Linking static target lib/librte_net.a 00:01:21.154 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:21.154 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:21.418 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:21.418 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:21.418 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:21.418 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:21.418 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:21.418 [146/268] Linking static target lib/librte_cmdline.a 00:01:21.418 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:21.418 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:21.418 [149/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:21.418 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:21.418 [151/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:21.677 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:21.677 [153/268] Linking static target lib/librte_timer.a 00:01:21.677 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.677 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:21.677 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:21.677 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.677 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:21.677 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:21.677 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:21.677 [161/268] Linking static target lib/librte_dmadev.a 00:01:21.677 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:21.677 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:21.935 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:21.935 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:21.935 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:21.935 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:21.935 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:21.935 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.935 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:21.935 [171/268] Linking static target lib/librte_power.a 00:01:21.935 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:21.935 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.935 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:21.935 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:21.935 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:21.935 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:21.935 [178/268] Linking static target lib/librte_hash.a 00:01:21.935 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:22.192 [180/268] Linking static target lib/librte_compressdev.a 00:01:22.192 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:22.192 [182/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:22.192 [183/268] Linking static target lib/librte_mbuf.a 00:01:22.192 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:22.192 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:22.192 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:22.192 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.192 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:22.192 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:22.192 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:22.192 [191/268] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.451 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:22.451 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:22.451 [194/268] Linking static target lib/librte_security.a 00:01:22.451 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:22.451 [196/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:22.451 [197/268] Linking static target lib/librte_reorder.a 00:01:22.451 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:22.451 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:22.451 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:22.451 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.451 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.451 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:22.451 [204/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.451 [205/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.451 [206/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:22.451 [207/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:22.451 [208/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.451 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:22.451 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:22.709 [211/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.709 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.709 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.709 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:22.709 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.709 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:22.709 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.709 [218/268] Linking static target lib/librte_ethdev.a 00:01:22.709 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.709 [220/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:22.709 [221/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.709 [222/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.709 [223/268] Linking static target drivers/librte_mempool_ring.a 00:01:22.709 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:22.709 [225/268] Linking static target lib/librte_cryptodev.a 00:01:22.975 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.909 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.279 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:27.175 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.175 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.175 [231/268] Linking target lib/librte_eal.so.24.1 00:01:27.175 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:27.175 [233/268] Linking target lib/librte_timer.so.24.1 00:01:27.175 [234/268] Linking target lib/librte_ring.so.24.1 00:01:27.175 [235/268] Linking target lib/librte_meter.so.24.1 00:01:27.175 [236/268] Linking target lib/librte_pci.so.24.1 00:01:27.175 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:27.175 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:27.433 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:27.433 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:27.433 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:27.433 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:27.433 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:27.433 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:27.433 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:27.433 [246/268] Linking target lib/librte_mempool.so.24.1 00:01:27.433 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:27.433 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:27.691 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:27.691 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:27.691 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:27.691 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:27.691 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:27.691 [254/268] Linking target lib/librte_net.so.24.1 00:01:27.691 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:27.949 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:27.949 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:27.949 [258/268] Linking target lib/librte_hash.so.24.1 00:01:27.949 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:27.949 [260/268] Linking target lib/librte_security.so.24.1 00:01:27.949 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:27.949 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:27.949 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:28.207 [264/268] Linking target lib/librte_power.so.24.1 00:01:30.114 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:30.373 [266/268] Linking static target lib/librte_vhost.a 00:01:31.308 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.309 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:31.309 INFO: autodetecting backend as ninja 00:01:31.309 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:32.268 CC lib/ut/ut.o 00:01:32.268 CC lib/log/log.o 00:01:32.268 CC 
lib/log/log_flags.o 00:01:32.268 CC lib/log/log_deprecated.o 00:01:32.268 CC lib/ut_mock/mock.o 00:01:32.526 LIB libspdk_log.a 00:01:32.526 LIB libspdk_ut.a 00:01:32.526 LIB libspdk_ut_mock.a 00:01:32.526 SO libspdk_ut.so.2.0 00:01:32.526 SO libspdk_log.so.7.0 00:01:32.526 SO libspdk_ut_mock.so.6.0 00:01:32.526 SYMLINK libspdk_ut.so 00:01:32.526 SYMLINK libspdk_ut_mock.so 00:01:32.526 SYMLINK libspdk_log.so 00:01:32.784 CXX lib/trace_parser/trace.o 00:01:32.784 CC lib/dma/dma.o 00:01:32.784 CC lib/ioat/ioat.o 00:01:32.784 CC lib/util/base64.o 00:01:32.784 CC lib/util/bit_array.o 00:01:32.784 CC lib/util/cpuset.o 00:01:32.784 CC lib/util/crc16.o 00:01:32.784 CC lib/util/crc32.o 00:01:32.784 CC lib/util/crc32c.o 00:01:32.784 CC lib/util/crc32_ieee.o 00:01:32.784 CC lib/util/crc64.o 00:01:32.784 CC lib/util/dif.o 00:01:32.784 CC lib/util/fd.o 00:01:32.784 CC lib/util/file.o 00:01:32.784 CC lib/util/hexlify.o 00:01:32.784 CC lib/util/iov.o 00:01:32.784 CC lib/util/math.o 00:01:32.784 CC lib/util/pipe.o 00:01:32.784 CC lib/util/strerror_tls.o 00:01:32.784 CC lib/util/string.o 00:01:32.784 CC lib/util/uuid.o 00:01:32.784 CC lib/util/fd_group.o 00:01:32.784 CC lib/util/xor.o 00:01:32.784 CC lib/util/zipf.o 00:01:32.784 CC lib/vfio_user/host/vfio_user_pci.o 00:01:32.784 CC lib/vfio_user/host/vfio_user.o 00:01:33.041 LIB libspdk_dma.a 00:01:33.041 SO libspdk_dma.so.4.0 00:01:33.041 SYMLINK libspdk_dma.so 00:01:33.041 LIB libspdk_ioat.a 00:01:33.041 SO libspdk_ioat.so.7.0 00:01:33.041 SYMLINK libspdk_ioat.so 00:01:33.041 LIB libspdk_vfio_user.a 00:01:33.041 SO libspdk_vfio_user.so.5.0 00:01:33.299 SYMLINK libspdk_vfio_user.so 00:01:33.299 LIB libspdk_util.a 00:01:33.299 SO libspdk_util.so.9.1 00:01:33.558 SYMLINK libspdk_util.so 00:01:33.558 CC lib/conf/conf.o 00:01:33.558 CC lib/vmd/vmd.o 00:01:33.558 CC lib/rdma_utils/rdma_utils.o 00:01:33.558 CC lib/rdma_provider/common.o 00:01:33.558 CC lib/idxd/idxd.o 00:01:33.558 CC lib/env_dpdk/env.o 00:01:33.558 CC lib/json/json_parse.o 00:01:33.558 CC lib/vmd/led.o 00:01:33.558 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:33.558 CC lib/idxd/idxd_user.o 00:01:33.558 CC lib/json/json_util.o 00:01:33.558 CC lib/env_dpdk/memory.o 00:01:33.558 CC lib/idxd/idxd_kernel.o 00:01:33.558 CC lib/env_dpdk/pci.o 00:01:33.558 CC lib/json/json_write.o 00:01:33.558 CC lib/env_dpdk/init.o 00:01:33.558 CC lib/env_dpdk/threads.o 00:01:33.558 CC lib/env_dpdk/pci_ioat.o 00:01:33.558 CC lib/env_dpdk/pci_virtio.o 00:01:33.558 CC lib/env_dpdk/pci_vmd.o 00:01:33.558 CC lib/env_dpdk/pci_idxd.o 00:01:33.558 CC lib/env_dpdk/pci_event.o 00:01:33.558 CC lib/env_dpdk/sigbus_handler.o 00:01:33.558 CC lib/env_dpdk/pci_dpdk.o 00:01:33.558 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:33.558 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:33.817 LIB libspdk_trace_parser.a 00:01:33.817 SO libspdk_trace_parser.so.5.0 00:01:33.817 LIB libspdk_conf.a 00:01:33.817 LIB libspdk_rdma_provider.a 00:01:33.817 SYMLINK libspdk_trace_parser.so 00:01:33.817 SO libspdk_conf.so.6.0 00:01:33.817 SO libspdk_rdma_provider.so.6.0 00:01:34.075 LIB libspdk_rdma_utils.a 00:01:34.075 SYMLINK libspdk_conf.so 00:01:34.075 LIB libspdk_json.a 00:01:34.075 SO libspdk_rdma_utils.so.1.0 00:01:34.075 SYMLINK libspdk_rdma_provider.so 00:01:34.075 SO libspdk_json.so.6.0 00:01:34.075 SYMLINK libspdk_rdma_utils.so 00:01:34.075 SYMLINK libspdk_json.so 00:01:34.075 CC lib/jsonrpc/jsonrpc_server.o 00:01:34.075 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:34.075 CC lib/jsonrpc/jsonrpc_client.o 00:01:34.334 CC lib/jsonrpc/jsonrpc_client_tcp.o 
00:01:34.334 LIB libspdk_idxd.a 00:01:34.334 SO libspdk_idxd.so.12.0 00:01:34.334 LIB libspdk_vmd.a 00:01:34.334 SO libspdk_vmd.so.6.0 00:01:34.334 SYMLINK libspdk_idxd.so 00:01:34.334 SYMLINK libspdk_vmd.so 00:01:34.592 LIB libspdk_jsonrpc.a 00:01:34.592 SO libspdk_jsonrpc.so.6.0 00:01:34.592 SYMLINK libspdk_jsonrpc.so 00:01:34.850 CC lib/rpc/rpc.o 00:01:34.850 LIB libspdk_rpc.a 00:01:34.850 SO libspdk_rpc.so.6.0 00:01:35.107 SYMLINK libspdk_rpc.so 00:01:35.107 CC lib/keyring/keyring.o 00:01:35.107 CC lib/keyring/keyring_rpc.o 00:01:35.107 CC lib/trace/trace.o 00:01:35.107 CC lib/trace/trace_flags.o 00:01:35.107 CC lib/trace/trace_rpc.o 00:01:35.107 CC lib/notify/notify.o 00:01:35.107 CC lib/notify/notify_rpc.o 00:01:35.365 LIB libspdk_notify.a 00:01:35.365 SO libspdk_notify.so.6.0 00:01:35.365 LIB libspdk_keyring.a 00:01:35.365 SYMLINK libspdk_notify.so 00:01:35.365 LIB libspdk_trace.a 00:01:35.365 SO libspdk_keyring.so.1.0 00:01:35.365 SO libspdk_trace.so.10.0 00:01:35.623 SYMLINK libspdk_keyring.so 00:01:35.623 SYMLINK libspdk_trace.so 00:01:35.623 CC lib/thread/thread.o 00:01:35.623 CC lib/thread/iobuf.o 00:01:35.623 CC lib/sock/sock.o 00:01:35.623 CC lib/sock/sock_rpc.o 00:01:35.623 LIB libspdk_env_dpdk.a 00:01:35.912 SO libspdk_env_dpdk.so.14.1 00:01:35.912 SYMLINK libspdk_env_dpdk.so 00:01:36.171 LIB libspdk_sock.a 00:01:36.171 SO libspdk_sock.so.10.0 00:01:36.171 SYMLINK libspdk_sock.so 00:01:36.429 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:36.429 CC lib/nvme/nvme_ctrlr.o 00:01:36.429 CC lib/nvme/nvme_fabric.o 00:01:36.429 CC lib/nvme/nvme_ns_cmd.o 00:01:36.429 CC lib/nvme/nvme_ns.o 00:01:36.429 CC lib/nvme/nvme_pcie_common.o 00:01:36.429 CC lib/nvme/nvme_pcie.o 00:01:36.429 CC lib/nvme/nvme_qpair.o 00:01:36.429 CC lib/nvme/nvme.o 00:01:36.429 CC lib/nvme/nvme_quirks.o 00:01:36.429 CC lib/nvme/nvme_transport.o 00:01:36.429 CC lib/nvme/nvme_discovery.o 00:01:36.429 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:36.429 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:36.429 CC lib/nvme/nvme_tcp.o 00:01:36.429 CC lib/nvme/nvme_opal.o 00:01:36.429 CC lib/nvme/nvme_io_msg.o 00:01:36.429 CC lib/nvme/nvme_poll_group.o 00:01:36.429 CC lib/nvme/nvme_zns.o 00:01:36.429 CC lib/nvme/nvme_stubs.o 00:01:36.429 CC lib/nvme/nvme_auth.o 00:01:36.429 CC lib/nvme/nvme_cuse.o 00:01:36.429 CC lib/nvme/nvme_vfio_user.o 00:01:36.429 CC lib/nvme/nvme_rdma.o 00:01:37.365 LIB libspdk_thread.a 00:01:37.365 SO libspdk_thread.so.10.1 00:01:37.365 SYMLINK libspdk_thread.so 00:01:37.623 CC lib/vfu_tgt/tgt_endpoint.o 00:01:37.624 CC lib/blob/blobstore.o 00:01:37.624 CC lib/accel/accel.o 00:01:37.624 CC lib/init/json_config.o 00:01:37.624 CC lib/vfu_tgt/tgt_rpc.o 00:01:37.624 CC lib/virtio/virtio.o 00:01:37.624 CC lib/accel/accel_rpc.o 00:01:37.624 CC lib/blob/request.o 00:01:37.624 CC lib/virtio/virtio_vhost_user.o 00:01:37.624 CC lib/init/subsystem.o 00:01:37.624 CC lib/accel/accel_sw.o 00:01:37.624 CC lib/blob/zeroes.o 00:01:37.624 CC lib/init/subsystem_rpc.o 00:01:37.624 CC lib/virtio/virtio_vfio_user.o 00:01:37.624 CC lib/init/rpc.o 00:01:37.624 CC lib/virtio/virtio_pci.o 00:01:37.624 CC lib/blob/blob_bs_dev.o 00:01:37.882 LIB libspdk_init.a 00:01:37.882 SO libspdk_init.so.5.0 00:01:37.882 LIB libspdk_virtio.a 00:01:37.882 LIB libspdk_vfu_tgt.a 00:01:37.882 SYMLINK libspdk_init.so 00:01:37.882 SO libspdk_vfu_tgt.so.3.0 00:01:37.882 SO libspdk_virtio.so.7.0 00:01:37.882 SYMLINK libspdk_vfu_tgt.so 00:01:37.882 SYMLINK libspdk_virtio.so 00:01:38.140 CC lib/event/app.o 00:01:38.140 CC lib/event/reactor.o 00:01:38.140 CC 
lib/event/log_rpc.o 00:01:38.140 CC lib/event/app_rpc.o 00:01:38.140 CC lib/event/scheduler_static.o 00:01:38.398 LIB libspdk_event.a 00:01:38.398 SO libspdk_event.so.14.0 00:01:38.656 LIB libspdk_accel.a 00:01:38.656 SYMLINK libspdk_event.so 00:01:38.656 SO libspdk_accel.so.15.1 00:01:38.656 SYMLINK libspdk_accel.so 00:01:38.656 LIB libspdk_nvme.a 00:01:38.915 CC lib/bdev/bdev.o 00:01:38.915 CC lib/bdev/bdev_rpc.o 00:01:38.915 CC lib/bdev/bdev_zone.o 00:01:38.915 CC lib/bdev/part.o 00:01:38.915 CC lib/bdev/scsi_nvme.o 00:01:38.915 SO libspdk_nvme.so.13.1 00:01:39.173 SYMLINK libspdk_nvme.so 00:01:40.547 LIB libspdk_blob.a 00:01:40.547 SO libspdk_blob.so.11.0 00:01:40.805 SYMLINK libspdk_blob.so 00:01:40.805 CC lib/lvol/lvol.o 00:01:40.805 CC lib/blobfs/blobfs.o 00:01:40.805 CC lib/blobfs/tree.o 00:01:41.371 LIB libspdk_bdev.a 00:01:41.372 SO libspdk_bdev.so.15.1 00:01:41.372 SYMLINK libspdk_bdev.so 00:01:41.637 CC lib/scsi/dev.o 00:01:41.637 CC lib/scsi/lun.o 00:01:41.637 CC lib/nbd/nbd.o 00:01:41.637 CC lib/nvmf/ctrlr.o 00:01:41.637 CC lib/ftl/ftl_core.o 00:01:41.637 CC lib/ublk/ublk.o 00:01:41.637 CC lib/scsi/port.o 00:01:41.637 CC lib/nvmf/ctrlr_discovery.o 00:01:41.637 CC lib/ftl/ftl_init.o 00:01:41.637 CC lib/nbd/nbd_rpc.o 00:01:41.637 CC lib/scsi/scsi.o 00:01:41.637 CC lib/ublk/ublk_rpc.o 00:01:41.637 CC lib/ftl/ftl_layout.o 00:01:41.637 CC lib/scsi/scsi_bdev.o 00:01:41.637 CC lib/nvmf/ctrlr_bdev.o 00:01:41.637 CC lib/ftl/ftl_debug.o 00:01:41.637 CC lib/scsi/scsi_pr.o 00:01:41.637 CC lib/scsi/scsi_rpc.o 00:01:41.637 CC lib/nvmf/subsystem.o 00:01:41.637 CC lib/ftl/ftl_sb.o 00:01:41.637 CC lib/ftl/ftl_io.o 00:01:41.637 CC lib/scsi/task.o 00:01:41.637 CC lib/nvmf/nvmf_rpc.o 00:01:41.637 CC lib/nvmf/nvmf.o 00:01:41.637 CC lib/ftl/ftl_l2p.o 00:01:41.637 CC lib/nvmf/transport.o 00:01:41.637 CC lib/ftl/ftl_l2p_flat.o 00:01:41.637 CC lib/nvmf/tcp.o 00:01:41.637 CC lib/ftl/ftl_nv_cache.o 00:01:41.637 CC lib/nvmf/stubs.o 00:01:41.637 CC lib/nvmf/mdns_server.o 00:01:41.637 CC lib/ftl/ftl_band.o 00:01:41.637 CC lib/ftl/ftl_band_ops.o 00:01:41.637 CC lib/nvmf/vfio_user.o 00:01:41.637 CC lib/ftl/ftl_rq.o 00:01:41.637 CC lib/nvmf/rdma.o 00:01:41.637 CC lib/ftl/ftl_writer.o 00:01:41.637 CC lib/nvmf/auth.o 00:01:41.637 CC lib/ftl/ftl_reloc.o 00:01:41.637 CC lib/ftl/ftl_l2p_cache.o 00:01:41.637 CC lib/ftl/ftl_p2l.o 00:01:41.637 CC lib/ftl/mngt/ftl_mngt.o 00:01:41.637 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:41.637 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:41.637 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:41.637 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:41.900 LIB libspdk_lvol.a 00:01:41.900 LIB libspdk_blobfs.a 00:01:41.900 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:41.900 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:41.900 SO libspdk_lvol.so.10.0 00:01:41.900 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:41.900 SO libspdk_blobfs.so.10.0 00:01:41.900 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:41.900 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:41.900 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:41.900 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:42.159 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:42.159 CC lib/ftl/utils/ftl_conf.o 00:01:42.159 SYMLINK libspdk_lvol.so 00:01:42.159 CC lib/ftl/utils/ftl_md.o 00:01:42.159 CC lib/ftl/utils/ftl_mempool.o 00:01:42.159 CC lib/ftl/utils/ftl_bitmap.o 00:01:42.159 CC lib/ftl/utils/ftl_property.o 00:01:42.159 SYMLINK libspdk_blobfs.so 00:01:42.159 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:42.159 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:42.159 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:42.159 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:42.159 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:42.159 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:42.159 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:42.159 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:42.159 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:42.421 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:42.421 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:42.421 CC lib/ftl/base/ftl_base_dev.o 00:01:42.421 CC lib/ftl/base/ftl_base_bdev.o 00:01:42.421 CC lib/ftl/ftl_trace.o 00:01:42.421 LIB libspdk_nbd.a 00:01:42.421 SO libspdk_nbd.so.7.0 00:01:42.421 LIB libspdk_scsi.a 00:01:42.421 SYMLINK libspdk_nbd.so 00:01:42.680 SO libspdk_scsi.so.9.0 00:01:42.680 LIB libspdk_ublk.a 00:01:42.680 SYMLINK libspdk_scsi.so 00:01:42.680 SO libspdk_ublk.so.3.0 00:01:42.680 SYMLINK libspdk_ublk.so 00:01:42.939 CC lib/iscsi/conn.o 00:01:42.939 CC lib/vhost/vhost.o 00:01:42.939 CC lib/iscsi/init_grp.o 00:01:42.939 CC lib/vhost/vhost_rpc.o 00:01:42.939 CC lib/iscsi/iscsi.o 00:01:42.939 CC lib/vhost/vhost_scsi.o 00:01:42.939 CC lib/iscsi/md5.o 00:01:42.939 CC lib/vhost/vhost_blk.o 00:01:42.939 CC lib/iscsi/param.o 00:01:42.939 CC lib/vhost/rte_vhost_user.o 00:01:42.939 CC lib/iscsi/portal_grp.o 00:01:42.939 CC lib/iscsi/tgt_node.o 00:01:42.939 CC lib/iscsi/iscsi_subsystem.o 00:01:42.939 CC lib/iscsi/iscsi_rpc.o 00:01:42.939 CC lib/iscsi/task.o 00:01:42.939 LIB libspdk_ftl.a 00:01:43.197 SO libspdk_ftl.so.9.0 00:01:43.455 SYMLINK libspdk_ftl.so 00:01:44.022 LIB libspdk_vhost.a 00:01:44.022 SO libspdk_vhost.so.8.0 00:01:44.280 LIB libspdk_nvmf.a 00:01:44.280 SYMLINK libspdk_vhost.so 00:01:44.280 SO libspdk_nvmf.so.18.1 00:01:44.280 LIB libspdk_iscsi.a 00:01:44.280 SO libspdk_iscsi.so.8.0 00:01:44.538 SYMLINK libspdk_nvmf.so 00:01:44.538 SYMLINK libspdk_iscsi.so 00:01:44.796 CC module/env_dpdk/env_dpdk_rpc.o 00:01:44.796 CC module/vfu_device/vfu_virtio.o 00:01:44.796 CC module/vfu_device/vfu_virtio_blk.o 00:01:44.796 CC module/vfu_device/vfu_virtio_scsi.o 00:01:44.796 CC module/vfu_device/vfu_virtio_rpc.o 00:01:44.796 CC module/sock/posix/posix.o 00:01:44.796 CC module/keyring/linux/keyring.o 00:01:44.796 CC module/scheduler/gscheduler/gscheduler.o 00:01:44.796 CC module/keyring/file/keyring.o 00:01:44.796 CC module/accel/ioat/accel_ioat.o 00:01:44.796 CC module/accel/iaa/accel_iaa.o 00:01:44.796 CC module/keyring/linux/keyring_rpc.o 00:01:44.796 CC module/accel/error/accel_error.o 00:01:44.796 CC module/accel/ioat/accel_ioat_rpc.o 00:01:44.796 CC module/keyring/file/keyring_rpc.o 00:01:44.796 CC module/accel/iaa/accel_iaa_rpc.o 00:01:44.796 CC module/accel/error/accel_error_rpc.o 00:01:44.796 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:44.796 CC module/accel/dsa/accel_dsa.o 00:01:44.796 CC module/accel/dsa/accel_dsa_rpc.o 00:01:44.796 CC module/blob/bdev/blob_bdev.o 00:01:44.796 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:45.054 LIB libspdk_env_dpdk_rpc.a 00:01:45.054 SO libspdk_env_dpdk_rpc.so.6.0 00:01:45.054 SYMLINK libspdk_env_dpdk_rpc.so 00:01:45.054 LIB libspdk_keyring_linux.a 00:01:45.054 LIB libspdk_keyring_file.a 00:01:45.054 LIB libspdk_scheduler_gscheduler.a 00:01:45.054 LIB libspdk_scheduler_dpdk_governor.a 00:01:45.054 SO libspdk_keyring_linux.so.1.0 00:01:45.054 SO libspdk_keyring_file.so.1.0 00:01:45.054 SO libspdk_scheduler_gscheduler.so.4.0 00:01:45.054 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:45.054 LIB libspdk_accel_ioat.a 00:01:45.054 LIB libspdk_scheduler_dynamic.a 00:01:45.054 LIB libspdk_accel_iaa.a 00:01:45.054 LIB libspdk_accel_error.a 00:01:45.054 
SO libspdk_scheduler_dynamic.so.4.0 00:01:45.054 SO libspdk_accel_ioat.so.6.0 00:01:45.054 SYMLINK libspdk_keyring_linux.so 00:01:45.054 SYMLINK libspdk_keyring_file.so 00:01:45.054 SYMLINK libspdk_scheduler_gscheduler.so 00:01:45.054 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:45.054 SO libspdk_accel_iaa.so.3.0 00:01:45.054 SO libspdk_accel_error.so.2.0 00:01:45.054 LIB libspdk_accel_dsa.a 00:01:45.054 SYMLINK libspdk_scheduler_dynamic.so 00:01:45.311 LIB libspdk_blob_bdev.a 00:01:45.311 SYMLINK libspdk_accel_ioat.so 00:01:45.311 SO libspdk_accel_dsa.so.5.0 00:01:45.311 SYMLINK libspdk_accel_error.so 00:01:45.311 SYMLINK libspdk_accel_iaa.so 00:01:45.311 SO libspdk_blob_bdev.so.11.0 00:01:45.311 SYMLINK libspdk_blob_bdev.so 00:01:45.311 SYMLINK libspdk_accel_dsa.so 00:01:45.574 LIB libspdk_vfu_device.a 00:01:45.574 SO libspdk_vfu_device.so.3.0 00:01:45.574 CC module/bdev/null/bdev_null.o 00:01:45.574 CC module/bdev/error/vbdev_error.o 00:01:45.574 CC module/bdev/null/bdev_null_rpc.o 00:01:45.574 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:45.574 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:45.574 CC module/blobfs/bdev/blobfs_bdev.o 00:01:45.574 CC module/bdev/passthru/vbdev_passthru.o 00:01:45.574 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:45.574 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:45.574 CC module/bdev/lvol/vbdev_lvol.o 00:01:45.574 CC module/bdev/malloc/bdev_malloc.o 00:01:45.574 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:45.574 CC module/bdev/gpt/gpt.o 00:01:45.574 CC module/bdev/error/vbdev_error_rpc.o 00:01:45.574 CC module/bdev/delay/vbdev_delay.o 00:01:45.574 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:45.574 CC module/bdev/ftl/bdev_ftl.o 00:01:45.574 CC module/bdev/nvme/bdev_nvme.o 00:01:45.574 CC module/bdev/gpt/vbdev_gpt.o 00:01:45.574 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:45.574 CC module/bdev/raid/bdev_raid.o 00:01:45.574 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:45.574 CC module/bdev/raid/bdev_raid_rpc.o 00:01:45.574 CC module/bdev/split/vbdev_split.o 00:01:45.574 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:45.574 CC module/bdev/nvme/nvme_rpc.o 00:01:45.574 CC module/bdev/raid/bdev_raid_sb.o 00:01:45.574 CC module/bdev/nvme/bdev_mdns_client.o 00:01:45.574 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:45.574 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:45.574 CC module/bdev/iscsi/bdev_iscsi.o 00:01:45.574 CC module/bdev/nvme/vbdev_opal.o 00:01:45.574 CC module/bdev/split/vbdev_split_rpc.o 00:01:45.574 CC module/bdev/raid/raid0.o 00:01:45.574 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:45.574 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:45.574 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:45.574 CC module/bdev/raid/raid1.o 00:01:45.574 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:45.574 CC module/bdev/raid/concat.o 00:01:45.574 CC module/bdev/aio/bdev_aio.o 00:01:45.574 CC module/bdev/aio/bdev_aio_rpc.o 00:01:45.574 SYMLINK libspdk_vfu_device.so 00:01:45.832 LIB libspdk_sock_posix.a 00:01:45.832 SO libspdk_sock_posix.so.6.0 00:01:45.832 LIB libspdk_blobfs_bdev.a 00:01:45.832 SO libspdk_blobfs_bdev.so.6.0 00:01:45.832 SYMLINK libspdk_sock_posix.so 00:01:45.832 LIB libspdk_bdev_null.a 00:01:45.832 LIB libspdk_bdev_split.a 00:01:46.090 SYMLINK libspdk_blobfs_bdev.so 00:01:46.091 LIB libspdk_bdev_passthru.a 00:01:46.091 LIB libspdk_bdev_error.a 00:01:46.091 SO libspdk_bdev_null.so.6.0 00:01:46.091 SO libspdk_bdev_split.so.6.0 00:01:46.091 SO libspdk_bdev_passthru.so.6.0 00:01:46.091 SO libspdk_bdev_error.so.6.0 00:01:46.091 LIB 
libspdk_bdev_gpt.a 00:01:46.091 SYMLINK libspdk_bdev_split.so 00:01:46.091 SYMLINK libspdk_bdev_null.so 00:01:46.091 SO libspdk_bdev_gpt.so.6.0 00:01:46.091 LIB libspdk_bdev_ftl.a 00:01:46.091 SYMLINK libspdk_bdev_passthru.so 00:01:46.091 SYMLINK libspdk_bdev_error.so 00:01:46.091 SO libspdk_bdev_ftl.so.6.0 00:01:46.091 LIB libspdk_bdev_aio.a 00:01:46.091 LIB libspdk_bdev_malloc.a 00:01:46.091 LIB libspdk_bdev_zone_block.a 00:01:46.091 SYMLINK libspdk_bdev_gpt.so 00:01:46.091 LIB libspdk_bdev_iscsi.a 00:01:46.091 SO libspdk_bdev_malloc.so.6.0 00:01:46.091 SO libspdk_bdev_aio.so.6.0 00:01:46.091 SO libspdk_bdev_zone_block.so.6.0 00:01:46.091 SYMLINK libspdk_bdev_ftl.so 00:01:46.091 SO libspdk_bdev_iscsi.so.6.0 00:01:46.091 LIB libspdk_bdev_delay.a 00:01:46.091 SYMLINK libspdk_bdev_aio.so 00:01:46.091 SYMLINK libspdk_bdev_malloc.so 00:01:46.091 SO libspdk_bdev_delay.so.6.0 00:01:46.091 SYMLINK libspdk_bdev_zone_block.so 00:01:46.091 SYMLINK libspdk_bdev_iscsi.so 00:01:46.091 LIB libspdk_bdev_lvol.a 00:01:46.350 SYMLINK libspdk_bdev_delay.so 00:01:46.350 SO libspdk_bdev_lvol.so.6.0 00:01:46.350 SYMLINK libspdk_bdev_lvol.so 00:01:46.350 LIB libspdk_bdev_virtio.a 00:01:46.350 SO libspdk_bdev_virtio.so.6.0 00:01:46.350 SYMLINK libspdk_bdev_virtio.so 00:01:46.609 LIB libspdk_bdev_raid.a 00:01:46.609 SO libspdk_bdev_raid.so.6.0 00:01:46.868 SYMLINK libspdk_bdev_raid.so 00:01:47.807 LIB libspdk_bdev_nvme.a 00:01:47.807 SO libspdk_bdev_nvme.so.7.0 00:01:47.807 SYMLINK libspdk_bdev_nvme.so 00:01:48.375 CC module/event/subsystems/sock/sock.o 00:01:48.375 CC module/event/subsystems/scheduler/scheduler.o 00:01:48.375 CC module/event/subsystems/iobuf/iobuf.o 00:01:48.375 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:48.375 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:48.375 CC module/event/subsystems/keyring/keyring.o 00:01:48.375 CC module/event/subsystems/vmd/vmd.o 00:01:48.375 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:48.375 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:48.375 LIB libspdk_event_keyring.a 00:01:48.375 LIB libspdk_event_vhost_blk.a 00:01:48.375 LIB libspdk_event_vfu_tgt.a 00:01:48.375 LIB libspdk_event_vmd.a 00:01:48.375 LIB libspdk_event_scheduler.a 00:01:48.375 LIB libspdk_event_sock.a 00:01:48.375 LIB libspdk_event_iobuf.a 00:01:48.375 SO libspdk_event_keyring.so.1.0 00:01:48.375 SO libspdk_event_vhost_blk.so.3.0 00:01:48.375 SO libspdk_event_scheduler.so.4.0 00:01:48.375 SO libspdk_event_vfu_tgt.so.3.0 00:01:48.375 SO libspdk_event_sock.so.5.0 00:01:48.375 SO libspdk_event_vmd.so.6.0 00:01:48.375 SO libspdk_event_iobuf.so.3.0 00:01:48.375 SYMLINK libspdk_event_keyring.so 00:01:48.375 SYMLINK libspdk_event_vhost_blk.so 00:01:48.375 SYMLINK libspdk_event_vfu_tgt.so 00:01:48.375 SYMLINK libspdk_event_sock.so 00:01:48.375 SYMLINK libspdk_event_scheduler.so 00:01:48.375 SYMLINK libspdk_event_vmd.so 00:01:48.375 SYMLINK libspdk_event_iobuf.so 00:01:48.634 CC module/event/subsystems/accel/accel.o 00:01:48.893 LIB libspdk_event_accel.a 00:01:48.893 SO libspdk_event_accel.so.6.0 00:01:48.893 SYMLINK libspdk_event_accel.so 00:01:49.151 CC module/event/subsystems/bdev/bdev.o 00:01:49.151 LIB libspdk_event_bdev.a 00:01:49.151 SO libspdk_event_bdev.so.6.0 00:01:49.409 SYMLINK libspdk_event_bdev.so 00:01:49.409 CC module/event/subsystems/nbd/nbd.o 00:01:49.409 CC module/event/subsystems/ublk/ublk.o 00:01:49.409 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:49.409 CC module/event/subsystems/scsi/scsi.o 00:01:49.409 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:49.667 
LIB libspdk_event_nbd.a 00:01:49.667 LIB libspdk_event_ublk.a 00:01:49.667 LIB libspdk_event_scsi.a 00:01:49.667 SO libspdk_event_ublk.so.3.0 00:01:49.667 SO libspdk_event_nbd.so.6.0 00:01:49.667 SO libspdk_event_scsi.so.6.0 00:01:49.667 SYMLINK libspdk_event_ublk.so 00:01:49.667 SYMLINK libspdk_event_nbd.so 00:01:49.667 SYMLINK libspdk_event_scsi.so 00:01:49.667 LIB libspdk_event_nvmf.a 00:01:49.667 SO libspdk_event_nvmf.so.6.0 00:01:49.667 SYMLINK libspdk_event_nvmf.so 00:01:49.925 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:49.925 CC module/event/subsystems/iscsi/iscsi.o 00:01:49.925 LIB libspdk_event_vhost_scsi.a 00:01:49.925 LIB libspdk_event_iscsi.a 00:01:49.925 SO libspdk_event_vhost_scsi.so.3.0 00:01:50.183 SO libspdk_event_iscsi.so.6.0 00:01:50.183 SYMLINK libspdk_event_vhost_scsi.so 00:01:50.183 SYMLINK libspdk_event_iscsi.so 00:01:50.183 SO libspdk.so.6.0 00:01:50.184 SYMLINK libspdk.so 00:01:50.450 CC app/trace_record/trace_record.o 00:01:50.450 CC app/spdk_top/spdk_top.o 00:01:50.450 CXX app/trace/trace.o 00:01:50.450 TEST_HEADER include/spdk/accel_module.h 00:01:50.450 TEST_HEADER include/spdk/accel.h 00:01:50.450 TEST_HEADER include/spdk/assert.h 00:01:50.450 CC app/spdk_nvme_discover/discovery_aer.o 00:01:50.450 TEST_HEADER include/spdk/barrier.h 00:01:50.450 TEST_HEADER include/spdk/base64.h 00:01:50.450 CC app/spdk_lspci/spdk_lspci.o 00:01:50.450 CC app/spdk_nvme_perf/perf.o 00:01:50.450 TEST_HEADER include/spdk/bdev.h 00:01:50.450 TEST_HEADER include/spdk/bdev_module.h 00:01:50.450 CC app/spdk_nvme_identify/identify.o 00:01:50.450 TEST_HEADER include/spdk/bdev_zone.h 00:01:50.450 CC test/rpc_client/rpc_client_test.o 00:01:50.450 TEST_HEADER include/spdk/bit_array.h 00:01:50.450 TEST_HEADER include/spdk/bit_pool.h 00:01:50.450 TEST_HEADER include/spdk/blob_bdev.h 00:01:50.450 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:50.450 TEST_HEADER include/spdk/blobfs.h 00:01:50.450 TEST_HEADER include/spdk/blob.h 00:01:50.450 TEST_HEADER include/spdk/conf.h 00:01:50.450 TEST_HEADER include/spdk/config.h 00:01:50.450 TEST_HEADER include/spdk/cpuset.h 00:01:50.450 TEST_HEADER include/spdk/crc16.h 00:01:50.450 TEST_HEADER include/spdk/crc32.h 00:01:50.450 TEST_HEADER include/spdk/crc64.h 00:01:50.450 TEST_HEADER include/spdk/dif.h 00:01:50.450 TEST_HEADER include/spdk/dma.h 00:01:50.450 TEST_HEADER include/spdk/endian.h 00:01:50.450 TEST_HEADER include/spdk/env_dpdk.h 00:01:50.450 TEST_HEADER include/spdk/env.h 00:01:50.450 TEST_HEADER include/spdk/fd_group.h 00:01:50.450 TEST_HEADER include/spdk/event.h 00:01:50.450 TEST_HEADER include/spdk/fd.h 00:01:50.450 TEST_HEADER include/spdk/file.h 00:01:50.450 TEST_HEADER include/spdk/gpt_spec.h 00:01:50.450 TEST_HEADER include/spdk/ftl.h 00:01:50.450 TEST_HEADER include/spdk/hexlify.h 00:01:50.450 TEST_HEADER include/spdk/histogram_data.h 00:01:50.450 TEST_HEADER include/spdk/idxd_spec.h 00:01:50.450 TEST_HEADER include/spdk/idxd.h 00:01:50.450 TEST_HEADER include/spdk/init.h 00:01:50.450 TEST_HEADER include/spdk/ioat.h 00:01:50.450 TEST_HEADER include/spdk/ioat_spec.h 00:01:50.450 TEST_HEADER include/spdk/iscsi_spec.h 00:01:50.450 TEST_HEADER include/spdk/json.h 00:01:50.450 TEST_HEADER include/spdk/jsonrpc.h 00:01:50.450 TEST_HEADER include/spdk/keyring.h 00:01:50.450 TEST_HEADER include/spdk/keyring_module.h 00:01:50.450 TEST_HEADER include/spdk/likely.h 00:01:50.450 TEST_HEADER include/spdk/log.h 00:01:50.450 TEST_HEADER include/spdk/lvol.h 00:01:50.450 TEST_HEADER include/spdk/memory.h 00:01:50.450 TEST_HEADER 
include/spdk/mmio.h 00:01:50.450 TEST_HEADER include/spdk/nbd.h 00:01:50.450 TEST_HEADER include/spdk/notify.h 00:01:50.450 TEST_HEADER include/spdk/nvme.h 00:01:50.450 TEST_HEADER include/spdk/nvme_intel.h 00:01:50.450 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:50.450 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:50.450 TEST_HEADER include/spdk/nvme_zns.h 00:01:50.450 TEST_HEADER include/spdk/nvme_spec.h 00:01:50.450 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:50.450 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:50.450 TEST_HEADER include/spdk/nvmf.h 00:01:50.450 TEST_HEADER include/spdk/nvmf_spec.h 00:01:50.450 TEST_HEADER include/spdk/nvmf_transport.h 00:01:50.450 TEST_HEADER include/spdk/opal.h 00:01:50.450 TEST_HEADER include/spdk/opal_spec.h 00:01:50.450 TEST_HEADER include/spdk/pci_ids.h 00:01:50.450 TEST_HEADER include/spdk/pipe.h 00:01:50.450 TEST_HEADER include/spdk/queue.h 00:01:50.450 TEST_HEADER include/spdk/reduce.h 00:01:50.450 TEST_HEADER include/spdk/rpc.h 00:01:50.450 TEST_HEADER include/spdk/scheduler.h 00:01:50.450 TEST_HEADER include/spdk/scsi.h 00:01:50.450 TEST_HEADER include/spdk/scsi_spec.h 00:01:50.450 TEST_HEADER include/spdk/sock.h 00:01:50.450 TEST_HEADER include/spdk/stdinc.h 00:01:50.450 TEST_HEADER include/spdk/string.h 00:01:50.450 TEST_HEADER include/spdk/thread.h 00:01:50.450 TEST_HEADER include/spdk/trace.h 00:01:50.450 TEST_HEADER include/spdk/trace_parser.h 00:01:50.450 TEST_HEADER include/spdk/tree.h 00:01:50.450 TEST_HEADER include/spdk/ublk.h 00:01:50.450 TEST_HEADER include/spdk/util.h 00:01:50.450 TEST_HEADER include/spdk/uuid.h 00:01:50.450 TEST_HEADER include/spdk/version.h 00:01:50.450 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:50.450 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:50.450 TEST_HEADER include/spdk/vhost.h 00:01:50.450 TEST_HEADER include/spdk/vmd.h 00:01:50.450 TEST_HEADER include/spdk/xor.h 00:01:50.450 TEST_HEADER include/spdk/zipf.h 00:01:50.450 CXX test/cpp_headers/accel.o 00:01:50.450 CXX test/cpp_headers/accel_module.o 00:01:50.450 CXX test/cpp_headers/assert.o 00:01:50.450 CXX test/cpp_headers/base64.o 00:01:50.450 CXX test/cpp_headers/barrier.o 00:01:50.450 CXX test/cpp_headers/bdev.o 00:01:50.450 CXX test/cpp_headers/bdev_module.o 00:01:50.450 CXX test/cpp_headers/bdev_zone.o 00:01:50.450 CXX test/cpp_headers/bit_array.o 00:01:50.450 CXX test/cpp_headers/bit_pool.o 00:01:50.450 CXX test/cpp_headers/blob_bdev.o 00:01:50.450 CC app/spdk_dd/spdk_dd.o 00:01:50.450 CXX test/cpp_headers/blobfs_bdev.o 00:01:50.450 CXX test/cpp_headers/blobfs.o 00:01:50.450 CXX test/cpp_headers/blob.o 00:01:50.450 CXX test/cpp_headers/conf.o 00:01:50.450 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:50.450 CXX test/cpp_headers/config.o 00:01:50.450 CXX test/cpp_headers/cpuset.o 00:01:50.450 CXX test/cpp_headers/crc16.o 00:01:50.450 CC app/iscsi_tgt/iscsi_tgt.o 00:01:50.450 CC app/nvmf_tgt/nvmf_main.o 00:01:50.450 CXX test/cpp_headers/crc32.o 00:01:50.450 CC examples/util/zipf/zipf.o 00:01:50.450 CC examples/ioat/perf/perf.o 00:01:50.450 CC app/spdk_tgt/spdk_tgt.o 00:01:50.450 CC examples/ioat/verify/verify.o 00:01:50.450 CC test/app/histogram_perf/histogram_perf.o 00:01:50.450 CC test/app/jsoncat/jsoncat.o 00:01:50.450 CC test/env/pci/pci_ut.o 00:01:50.450 CC app/fio/nvme/fio_plugin.o 00:01:50.450 CC test/app/stub/stub.o 00:01:50.450 CC test/env/vtophys/vtophys.o 00:01:50.450 CC test/env/memory/memory_ut.o 00:01:50.450 CC test/thread/poller_perf/poller_perf.o 00:01:50.450 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:50.724 
CC test/dma/test_dma/test_dma.o 00:01:50.724 CC app/fio/bdev/fio_plugin.o 00:01:50.724 CC test/app/bdev_svc/bdev_svc.o 00:01:50.724 LINK spdk_lspci 00:01:50.724 CC test/env/mem_callbacks/mem_callbacks.o 00:01:50.724 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:50.724 LINK spdk_nvme_discover 00:01:50.724 LINK rpc_client_test 00:01:51.028 LINK jsoncat 00:01:51.028 LINK histogram_perf 00:01:51.028 LINK vtophys 00:01:51.028 LINK poller_perf 00:01:51.028 CXX test/cpp_headers/crc64.o 00:01:51.028 CXX test/cpp_headers/dif.o 00:01:51.028 CXX test/cpp_headers/dma.o 00:01:51.028 LINK zipf 00:01:51.028 LINK interrupt_tgt 00:01:51.028 CXX test/cpp_headers/endian.o 00:01:51.028 LINK env_dpdk_post_init 00:01:51.028 CXX test/cpp_headers/env_dpdk.o 00:01:51.028 CXX test/cpp_headers/env.o 00:01:51.028 CXX test/cpp_headers/event.o 00:01:51.028 CXX test/cpp_headers/fd_group.o 00:01:51.028 LINK nvmf_tgt 00:01:51.028 CXX test/cpp_headers/fd.o 00:01:51.028 CXX test/cpp_headers/file.o 00:01:51.028 CXX test/cpp_headers/ftl.o 00:01:51.028 CXX test/cpp_headers/gpt_spec.o 00:01:51.028 LINK stub 00:01:51.028 LINK spdk_trace_record 00:01:51.028 LINK iscsi_tgt 00:01:51.028 LINK ioat_perf 00:01:51.028 LINK spdk_tgt 00:01:51.028 CXX test/cpp_headers/hexlify.o 00:01:51.028 LINK verify 00:01:51.028 CXX test/cpp_headers/histogram_data.o 00:01:51.028 CXX test/cpp_headers/idxd.o 00:01:51.028 CXX test/cpp_headers/idxd_spec.o 00:01:51.028 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:51.028 CXX test/cpp_headers/init.o 00:01:51.028 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:51.028 LINK bdev_svc 00:01:51.028 CXX test/cpp_headers/ioat.o 00:01:51.028 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:51.296 CXX test/cpp_headers/ioat_spec.o 00:01:51.296 CXX test/cpp_headers/iscsi_spec.o 00:01:51.296 CXX test/cpp_headers/json.o 00:01:51.296 CXX test/cpp_headers/jsonrpc.o 00:01:51.296 CXX test/cpp_headers/keyring.o 00:01:51.296 LINK spdk_dd 00:01:51.296 LINK spdk_trace 00:01:51.296 CXX test/cpp_headers/keyring_module.o 00:01:51.296 CXX test/cpp_headers/likely.o 00:01:51.296 CXX test/cpp_headers/log.o 00:01:51.296 CXX test/cpp_headers/lvol.o 00:01:51.296 CXX test/cpp_headers/memory.o 00:01:51.296 CXX test/cpp_headers/mmio.o 00:01:51.296 CXX test/cpp_headers/nbd.o 00:01:51.296 CXX test/cpp_headers/notify.o 00:01:51.296 CXX test/cpp_headers/nvme.o 00:01:51.296 CXX test/cpp_headers/nvme_intel.o 00:01:51.296 CXX test/cpp_headers/nvme_ocssd.o 00:01:51.296 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:51.296 CXX test/cpp_headers/nvme_spec.o 00:01:51.296 LINK pci_ut 00:01:51.296 CXX test/cpp_headers/nvme_zns.o 00:01:51.296 CXX test/cpp_headers/nvmf_cmd.o 00:01:51.296 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:51.296 CXX test/cpp_headers/nvmf.o 00:01:51.296 LINK test_dma 00:01:51.296 CXX test/cpp_headers/nvmf_spec.o 00:01:51.296 CXX test/cpp_headers/nvmf_transport.o 00:01:51.296 CXX test/cpp_headers/opal.o 00:01:51.296 CXX test/cpp_headers/opal_spec.o 00:01:51.559 CXX test/cpp_headers/pci_ids.o 00:01:51.559 CXX test/cpp_headers/pipe.o 00:01:51.559 CXX test/cpp_headers/queue.o 00:01:51.559 CXX test/cpp_headers/reduce.o 00:01:51.559 LINK nvme_fuzz 00:01:51.559 CC test/event/event_perf/event_perf.o 00:01:51.559 LINK spdk_bdev 00:01:51.559 CC examples/vmd/lsvmd/lsvmd.o 00:01:51.559 CC test/event/reactor/reactor.o 00:01:51.559 CC examples/sock/hello_world/hello_sock.o 00:01:51.559 CC examples/idxd/perf/perf.o 00:01:51.559 CXX test/cpp_headers/rpc.o 00:01:51.559 CXX test/cpp_headers/scheduler.o 00:01:51.559 CC 
test/event/reactor_perf/reactor_perf.o 00:01:51.559 LINK spdk_nvme 00:01:51.559 CC examples/thread/thread/thread_ex.o 00:01:51.559 CXX test/cpp_headers/scsi.o 00:01:51.559 CC examples/vmd/led/led.o 00:01:51.818 CXX test/cpp_headers/scsi_spec.o 00:01:51.818 CXX test/cpp_headers/sock.o 00:01:51.818 CC test/event/app_repeat/app_repeat.o 00:01:51.818 CXX test/cpp_headers/stdinc.o 00:01:51.818 CXX test/cpp_headers/string.o 00:01:51.818 CXX test/cpp_headers/thread.o 00:01:51.818 CXX test/cpp_headers/trace.o 00:01:51.818 CXX test/cpp_headers/trace_parser.o 00:01:51.818 CXX test/cpp_headers/tree.o 00:01:51.818 CXX test/cpp_headers/ublk.o 00:01:51.818 CXX test/cpp_headers/util.o 00:01:51.818 CXX test/cpp_headers/uuid.o 00:01:51.818 CXX test/cpp_headers/version.o 00:01:51.818 CC test/event/scheduler/scheduler.o 00:01:51.818 CXX test/cpp_headers/vfio_user_pci.o 00:01:51.818 CXX test/cpp_headers/vfio_user_spec.o 00:01:51.818 CXX test/cpp_headers/vhost.o 00:01:51.818 CXX test/cpp_headers/vmd.o 00:01:51.818 CC app/vhost/vhost.o 00:01:51.818 CXX test/cpp_headers/xor.o 00:01:51.818 CXX test/cpp_headers/zipf.o 00:01:51.818 LINK lsvmd 00:01:51.818 LINK mem_callbacks 00:01:51.818 LINK event_perf 00:01:51.818 LINK reactor 00:01:51.818 LINK reactor_perf 00:01:52.076 LINK vhost_fuzz 00:01:52.076 LINK spdk_nvme_perf 00:01:52.076 LINK app_repeat 00:01:52.076 LINK led 00:01:52.076 LINK spdk_nvme_identify 00:01:52.076 LINK spdk_top 00:01:52.076 LINK hello_sock 00:01:52.076 CC test/nvme/err_injection/err_injection.o 00:01:52.076 CC test/nvme/overhead/overhead.o 00:01:52.076 CC test/nvme/reset/reset.o 00:01:52.076 CC test/nvme/e2edp/nvme_dp.o 00:01:52.076 CC test/nvme/sgl/sgl.o 00:01:52.076 CC test/nvme/aer/aer.o 00:01:52.076 CC test/nvme/reserve/reserve.o 00:01:52.076 LINK thread 00:01:52.076 CC test/nvme/startup/startup.o 00:01:52.076 CC test/nvme/connect_stress/connect_stress.o 00:01:52.076 CC test/nvme/simple_copy/simple_copy.o 00:01:52.076 CC test/blobfs/mkfs/mkfs.o 00:01:52.076 CC test/nvme/boot_partition/boot_partition.o 00:01:52.076 CC test/accel/dif/dif.o 00:01:52.076 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:52.076 CC test/nvme/compliance/nvme_compliance.o 00:01:52.076 CC test/nvme/fused_ordering/fused_ordering.o 00:01:52.336 CC test/nvme/fdp/fdp.o 00:01:52.336 LINK vhost 00:01:52.336 CC test/nvme/cuse/cuse.o 00:01:52.336 LINK idxd_perf 00:01:52.336 CC test/lvol/esnap/esnap.o 00:01:52.336 LINK scheduler 00:01:52.336 LINK err_injection 00:01:52.336 LINK reserve 00:01:52.336 LINK mkfs 00:01:52.594 LINK startup 00:01:52.594 LINK fused_ordering 00:01:52.594 LINK boot_partition 00:01:52.594 LINK doorbell_aers 00:01:52.594 LINK connect_stress 00:01:52.594 LINK aer 00:01:52.594 LINK reset 00:01:52.594 LINK simple_copy 00:01:52.594 LINK sgl 00:01:52.594 LINK overhead 00:01:52.594 LINK memory_ut 00:01:52.594 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:52.594 CC examples/nvme/hello_world/hello_world.o 00:01:52.595 CC examples/nvme/abort/abort.o 00:01:52.595 CC examples/nvme/reconnect/reconnect.o 00:01:52.595 LINK nvme_dp 00:01:52.595 CC examples/nvme/hotplug/hotplug.o 00:01:52.595 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:52.595 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:52.595 CC examples/nvme/arbitration/arbitration.o 00:01:52.595 LINK nvme_compliance 00:01:52.595 CC examples/accel/perf/accel_perf.o 00:01:52.595 CC examples/blob/hello_world/hello_blob.o 00:01:52.595 CC examples/blob/cli/blobcli.o 00:01:52.595 LINK fdp 00:01:52.853 LINK cmb_copy 00:01:52.853 LINK dif 00:01:52.853 LINK 
pmr_persistence 00:01:52.853 LINK hotplug 00:01:52.853 LINK hello_world 00:01:53.111 LINK abort 00:01:53.111 LINK reconnect 00:01:53.111 LINK hello_blob 00:01:53.111 LINK arbitration 00:01:53.111 LINK nvme_manage 00:01:53.111 LINK accel_perf 00:01:53.111 LINK blobcli 00:01:53.369 CC test/bdev/bdevio/bdevio.o 00:01:53.369 LINK iscsi_fuzz 00:01:53.627 CC examples/bdev/hello_world/hello_bdev.o 00:01:53.627 CC examples/bdev/bdevperf/bdevperf.o 00:01:53.627 LINK bdevio 00:01:53.885 LINK hello_bdev 00:01:53.885 LINK cuse 00:01:54.143 LINK bdevperf 00:01:54.710 CC examples/nvmf/nvmf/nvmf.o 00:01:54.968 LINK nvmf 00:01:57.494 LINK esnap 00:01:57.494 00:01:57.494 real 0m48.531s 00:01:57.494 user 10m7.209s 00:01:57.494 sys 2m27.394s 00:01:57.494 18:56:37 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:57.494 18:56:37 make -- common/autotest_common.sh@10 -- $ set +x 00:01:57.494 ************************************ 00:01:57.494 END TEST make 00:01:57.494 ************************************ 00:01:57.752 18:56:37 -- common/autotest_common.sh@1142 -- $ return 0 00:01:57.752 18:56:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:57.752 18:56:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:57.752 18:56:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:57.752 18:56:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.752 18:56:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:57.752 18:56:37 -- pm/common@44 -- $ pid=3094059 00:01:57.752 18:56:37 -- pm/common@50 -- $ kill -TERM 3094059 00:01:57.752 18:56:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.752 18:56:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:57.752 18:56:37 -- pm/common@44 -- $ pid=3094061 00:01:57.752 18:56:37 -- pm/common@50 -- $ kill -TERM 3094061 00:01:57.752 18:56:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.752 18:56:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:57.752 18:56:37 -- pm/common@44 -- $ pid=3094063 00:01:57.752 18:56:37 -- pm/common@50 -- $ kill -TERM 3094063 00:01:57.752 18:56:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.752 18:56:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:57.752 18:56:37 -- pm/common@44 -- $ pid=3094094 00:01:57.752 18:56:37 -- pm/common@50 -- $ sudo -E kill -TERM 3094094 00:01:57.752 18:56:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:57.752 18:56:38 -- nvmf/common.sh@7 -- # uname -s 00:01:57.752 18:56:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:57.752 18:56:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:57.752 18:56:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:57.752 18:56:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:57.753 18:56:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:57.753 18:56:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:57.753 18:56:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:57.753 18:56:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:57.753 18:56:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:57.753 18:56:38 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:01:57.753 18:56:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:57.753 18:56:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:57.753 18:56:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:57.753 18:56:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:57.753 18:56:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:57.753 18:56:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:57.753 18:56:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:57.753 18:56:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:57.753 18:56:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.753 18:56:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.753 18:56:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.753 18:56:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.753 18:56:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.753 18:56:38 -- paths/export.sh@5 -- # export PATH 00:01:57.753 18:56:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.753 18:56:38 -- nvmf/common.sh@47 -- # : 0 00:01:57.753 18:56:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:57.753 18:56:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:57.753 18:56:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:57.753 18:56:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:57.753 18:56:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:57.753 18:56:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:57.753 18:56:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:57.753 18:56:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:57.753 18:56:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:57.753 18:56:38 -- spdk/autotest.sh@32 -- # uname -s 00:01:57.753 18:56:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:57.753 18:56:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:57.753 18:56:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.753 18:56:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:57.753 18:56:38 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:57.753 18:56:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:57.753 18:56:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:57.753 18:56:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:57.753 18:56:38 -- spdk/autotest.sh@48 -- # udevadm_pid=3149532 00:01:57.753 18:56:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:57.753 18:56:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:57.753 18:56:38 -- pm/common@17 -- # local monitor 00:01:57.753 18:56:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.753 18:56:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.753 18:56:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.753 18:56:38 -- pm/common@21 -- # date +%s 00:01:57.753 18:56:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.753 18:56:38 -- pm/common@21 -- # date +%s 00:01:57.753 18:56:38 -- pm/common@25 -- # sleep 1 00:01:57.753 18:56:38 -- pm/common@21 -- # date +%s 00:01:57.753 18:56:38 -- pm/common@21 -- # date +%s 00:01:57.753 18:56:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062598 00:01:57.753 18:56:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062598 00:01:57.753 18:56:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062598 00:01:57.753 18:56:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721062598 00:01:57.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062598_collect-vmstat.pm.log 00:01:57.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062598_collect-cpu-load.pm.log 00:01:57.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062598_collect-cpu-temp.pm.log 00:01:57.753 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721062598_collect-bmc-pm.bmc.pm.log 00:01:58.688 18:56:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:58.688 18:56:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:58.688 18:56:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:58.688 18:56:39 -- common/autotest_common.sh@10 -- # set +x 00:01:58.688 18:56:39 -- spdk/autotest.sh@59 -- # create_test_list 00:01:58.688 18:56:39 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:58.688 18:56:39 -- common/autotest_common.sh@10 -- # set +x 00:01:58.688 18:56:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:58.688 18:56:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.688 18:56:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
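[Editor's note] The entries above show autotest launching its resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) in the background with per-monitor PID files under the power/ output directory, which the kill -TERM calls recorded earlier in this log use to stop them. A minimal sketch of that pidfile start/stop pattern, with hypothetical script and path names (the actual SPDK pm scripts may differ in detail):

#!/usr/bin/env bash
# Sketch only: background-collector lifecycle suggested by the log, not the real SPDK code.
OUT=/tmp/power                          # assumed output directory (autotest uses <output>/power)
mkdir -p "$OUT"

start_monitor() {                       # launch a collector in the background, record its PID
    local name=$1; shift
    "$@" &                              # e.g. ./collect-cpu-load -d "$OUT" -l -p monitor.autotest
    echo $! > "$OUT/$name.pid"
}

stop_monitors() {                       # later, terminate each collector via its PID file
    local pid_file
    for pid_file in "$OUT"/*.pid; do
        [[ -e $pid_file ]] && kill -TERM "$(cat "$pid_file")"
    done
}

Recording the PIDs in files rather than shell variables lets a separate cleanup stage (or trap handler) stop the collectors even after the launching shell has exited, which matches the separate stop_monitor_resources step seen earlier.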
00:01:58.688 18:56:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:58.688 18:56:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.688 18:56:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:58.688 18:56:39 -- common/autotest_common.sh@1455 -- # uname 00:01:58.688 18:56:39 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:58.688 18:56:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:58.688 18:56:39 -- common/autotest_common.sh@1475 -- # uname 00:01:58.688 18:56:39 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:58.688 18:56:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:58.688 18:56:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:58.688 18:56:39 -- spdk/autotest.sh@72 -- # hash lcov 00:01:58.688 18:56:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:58.688 18:56:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:58.688 --rc lcov_branch_coverage=1 00:01:58.688 --rc lcov_function_coverage=1 00:01:58.688 --rc genhtml_branch_coverage=1 00:01:58.688 --rc genhtml_function_coverage=1 00:01:58.688 --rc genhtml_legend=1 00:01:58.688 --rc geninfo_all_blocks=1 00:01:58.688 ' 00:01:58.688 18:56:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:58.688 --rc lcov_branch_coverage=1 00:01:58.688 --rc lcov_function_coverage=1 00:01:58.688 --rc genhtml_branch_coverage=1 00:01:58.688 --rc genhtml_function_coverage=1 00:01:58.688 --rc genhtml_legend=1 00:01:58.688 --rc geninfo_all_blocks=1 00:01:58.688 ' 00:01:58.688 18:56:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:58.688 --rc lcov_branch_coverage=1 00:01:58.688 --rc lcov_function_coverage=1 00:01:58.688 --rc genhtml_branch_coverage=1 00:01:58.688 --rc genhtml_function_coverage=1 00:01:58.688 --rc genhtml_legend=1 00:01:58.688 --rc geninfo_all_blocks=1 00:01:58.688 --no-external' 00:01:58.688 18:56:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:58.688 --rc lcov_branch_coverage=1 00:01:58.688 --rc lcov_function_coverage=1 00:01:58.688 --rc genhtml_branch_coverage=1 00:01:58.688 --rc genhtml_function_coverage=1 00:01:58.688 --rc genhtml_legend=1 00:01:58.688 --rc geninfo_all_blocks=1 00:01:58.688 --no-external' 00:01:58.688 18:56:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:58.947 lcov: LCOV version 1.14 00:01:58.947 18:56:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:13.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:13.817 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:28.745 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:28.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:28.745 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:28.746 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:28.746 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:28.746 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:28.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:28.746 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:32.028 18:57:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:32.028 18:57:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:32.028 18:57:12 -- common/autotest_common.sh@10 -- # set +x 00:02:32.028 18:57:12 -- spdk/autotest.sh@91 -- # rm -f 00:02:32.028 18:57:12 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:32.961 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:32.961 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:32.961 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:32.961 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:32.961 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:32.961 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:33.219 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:33.219 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:33.219 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:33.219 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:33.219 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:33.219 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:33.219 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:33.219 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:33.219 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:33.219 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:33.219 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:33.219 18:57:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:33.219 18:57:13 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:33.219 18:57:13 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:33.219 18:57:13 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:33.219 18:57:13 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:33.219 18:57:13 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:33.219 18:57:13 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:33.219 18:57:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:33.219 18:57:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:33.219 18:57:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:33.219 
18:57:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:33.219 18:57:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:33.219 18:57:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:33.219 18:57:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:33.219 18:57:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:33.477 No valid GPT data, bailing 00:02:33.477 18:57:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:33.477 18:57:13 -- scripts/common.sh@391 -- # pt= 00:02:33.477 18:57:13 -- scripts/common.sh@392 -- # return 1 00:02:33.477 18:57:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:33.477 1+0 records in 00:02:33.477 1+0 records out 00:02:33.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00211789 s, 495 MB/s 00:02:33.477 18:57:13 -- spdk/autotest.sh@118 -- # sync 00:02:33.477 18:57:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:33.477 18:57:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:33.477 18:57:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:35.374 18:57:15 -- spdk/autotest.sh@124 -- # uname -s 00:02:35.374 18:57:15 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:35.374 18:57:15 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:35.374 18:57:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:35.374 18:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:35.374 18:57:15 -- common/autotest_common.sh@10 -- # set +x 00:02:35.374 ************************************ 00:02:35.374 START TEST setup.sh 00:02:35.374 ************************************ 00:02:35.374 18:57:15 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:35.374 * Looking for test storage... 00:02:35.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:35.374 18:57:15 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:35.374 18:57:15 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:35.374 18:57:15 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:35.374 18:57:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:35.374 18:57:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:35.374 18:57:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:35.374 ************************************ 00:02:35.374 START TEST acl 00:02:35.374 ************************************ 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:35.374 * Looking for test storage... 
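[Editor's note] Earlier in this block, before wiping /dev/nvme0n1 with dd, block_in_use runs spdk-gpt.py and blkid to check whether the namespace already carries a partition table; "No valid GPT data, bailing" together with an empty PTTYPE value means the device is treated as free and its first MiB is zeroed. A rough sketch of that guard, with an assumed device path (the real scripts/common.sh logic covers more cases):

#!/usr/bin/env bash
# Sketch only: wipe the start of a namespace when no partition table is detected.
dev=/dev/nvme0n1                                   # assumed device under test
if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
    echo "$dev has a $pt partition table, skipping wipe"
else
    dd if=/dev/zero of="$dev" bs=1M count=1        # clear stale metadata, as in the log above
    sync
fi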
00:02:35.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:35.374 18:57:15 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.374 18:57:15 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:35.374 18:57:15 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:35.374 18:57:15 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:35.374 18:57:15 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:35.374 18:57:15 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:35.374 18:57:15 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:35.374 18:57:15 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.374 18:57:15 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:36.747 18:57:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:36.747 18:57:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:36.747 18:57:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.747 18:57:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:36.747 18:57:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.747 18:57:17 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:38.121 Hugepages 00:02:38.121 node hugesize free / total 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 00:02:38.121 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.121 18:57:18 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.121 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:38.122 18:57:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:38.122 18:57:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:38.122 18:57:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:38.122 18:57:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:38.122 ************************************ 00:02:38.122 START TEST denied 00:02:38.122 ************************************ 00:02:38.122 18:57:18 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:38.122 18:57:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:38.122 18:57:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:38.122 18:57:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:38.122 18:57:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.122 18:57:18 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:39.496 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:39.496 18:57:19 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:39.496 18:57:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.024 00:02:42.024 real 0m3.847s 00:02:42.024 user 0m1.104s 00:02:42.024 sys 0m1.837s 00:02:42.024 18:57:22 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:42.024 18:57:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:42.024 ************************************ 00:02:42.024 END TEST denied 00:02:42.024 ************************************ 00:02:42.024 18:57:22 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:42.024 18:57:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:42.024 18:57:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:42.024 18:57:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:42.024 18:57:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:42.024 ************************************ 00:02:42.024 START TEST allowed 00:02:42.024 ************************************ 00:02:42.024 18:57:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:42.024 18:57:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:42.024 18:57:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:42.024 18:57:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:42.024 18:57:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.024 18:57:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:44.558 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.558 18:57:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:44.558 18:57:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:44.558 18:57:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:44.558 18:57:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.558 18:57:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.493 00:02:45.493 real 0m3.741s 00:02:45.493 user 0m0.967s 00:02:45.493 sys 0m1.579s 00:02:45.493 18:57:25 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:45.493 18:57:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:45.493 ************************************ 00:02:45.493 END TEST allowed 00:02:45.493 ************************************ 00:02:45.752 18:57:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:45.752 00:02:45.752 real 0m10.272s 00:02:45.752 user 0m3.142s 00:02:45.752 sys 0m5.099s 00:02:45.752 18:57:25 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:45.752 18:57:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:45.752 ************************************ 00:02:45.752 END TEST acl 00:02:45.752 ************************************ 00:02:45.752 18:57:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:45.752 18:57:25 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:45.752 18:57:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.752 18:57:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.752 18:57:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:45.752 ************************************ 00:02:45.752 START TEST hugepages 00:02:45.752 ************************************ 00:02:45.752 18:57:25 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:45.752 * Looking for test storage... 00:02:45.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 43715976 kB' 'MemAvailable: 47217968 kB' 'Buffers: 2704 kB' 'Cached: 10274968 kB' 'SwapCached: 0 kB' 'Active: 7276856 kB' 'Inactive: 3506596 kB' 'Active(anon): 6881752 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509040 kB' 'Mapped: 198220 kB' 'Shmem: 6375972 kB' 'KReclaimable: 188264 kB' 'Slab: 557404 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 369140 kB' 'KernelStack: 13120 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562296 kB' 'Committed_AS: 7991388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:02:45.752 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... identical "[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" checks for every remaining /proc/meminfo key from Active(file) through HugePages_Free: none matches, the loop keeps reading ...] 00:02:45.754 18:57:26 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.754 
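For readability: the xtrace above is the get_meminfo helper from setup/common.sh walking /proc/meminfo until it finds the requested key. A minimal sketch of that lookup, reconstructed from the traced calls (the shipped helper also accepts a NUMA node argument and slurps the file with mapfile, so this is a simplified illustration, not the actual script):

  get_meminfo() {                      # usage: get_meminfo Hugepagesize
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys are skipped, exactly as traced above
          echo "$val"                        # e.g. 2048 for Hugepagesize
          return 0
      done < /proc/meminfo
      return 1
  }

Called as get_meminfo Hugepagesize it prints 2048, the value hugepages.sh stores in default_hugepages just above.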
18:57:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:45.754 18:57:26 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:45.754 18:57:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:45.754 18:57:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:45.754 18:57:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:45.754 ************************************ 00:02:45.754 START TEST default_setup 00:02:45.754 ************************************ 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.754 18:57:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.128 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:47.128 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:47.128 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:47.128 
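Aside: the hugepage bookkeeping traced above (setup/hugepages.sh) reduces to the sketch below. The sysfs and procfs paths are taken verbatim from the log; the redirection target of the traced 'echo 0' and the exact loop shape are assumptions for illustration:

  default_hugepages=2048                                               # kB, from the get_meminfo call above
  default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  global_huge_nr=/proc/sys/vm/nr_hugepages
  # clear_hp: zero every per-node pool before the test allocates its own pages
  for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*/nr_hugepages; do
      echo 0 > "$hp"
  done
  export CLEAR_HUGE=yes

default_setup then requests 2097152 kB / 2048 kB per page = 1024 hugepages, all pinned to node 0 (nr_hugepages=1024, node_ids=('0') in the trace), before scripts/setup.sh rebinds the ioatdma and NVMe devices to vfio-pci as logged next.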
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:47.128 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:47.128 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:47.128 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:47.128 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:47.128 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:48.139 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45809112 kB' 'MemAvailable: 49311104 kB' 'Buffers: 2704 kB' 'Cached: 10275068 kB' 'SwapCached: 0 kB' 'Active: 7294684 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899580 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526848 kB' 'Mapped: 198284 kB' 'Shmem: 6376072 kB' 'KReclaimable: 188264 kB' 'Slab: 556812 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368548 kB' 
'KernelStack: 13040 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8012880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.139 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.140 
18:57:28 setup.sh.hugepages.default_setup -- [... identical "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" checks for every key from Inactive through WritebackTmp: none matches ...] 
00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.140 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.141 18:57:28 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45809152 kB' 'MemAvailable: 49311144 kB' 'Buffers: 2704 kB' 'Cached: 10275072 kB' 'SwapCached: 0 kB' 'Active: 7295040 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899936 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527188 kB' 'Mapped: 198256 kB' 'Shmem: 6376076 kB' 'KReclaimable: 188264 kB' 'Slab: 556812 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368548 kB' 'KernelStack: 13056 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8012900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.141 18:57:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ [... identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" checks for every key from Buffers through CmaTotal: none matches ...] 00:02:48.143 18:57:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- 
00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:48.143 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45809848 kB' 'MemAvailable: 49311840 kB' 'Buffers: 2704 kB' 'Cached: 10275072 kB' 'SwapCached: 0 kB' 'Active: 7294912 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899808 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527028 kB' 'Mapped: 198180 kB' 'Shmem: 6376076 kB' 'KReclaimable: 188264 kB' 'Slab: 556796 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368532 kB' 'KernelStack: 13056 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8012920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the @31/@32 compare-and-continue cycle repeats for every meminfo key ahead of HugePages_Rsvd (MemTotal through HugePages_Free); none match]
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:48.145 nr_hugepages=1024
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:48.145 resv_hugepages=0
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:48.145 surplus_hugepages=0
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:48.145 anon_hugepages=0
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:48.145 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45810104 kB' 'MemAvailable: 49312096 kB' 'Buffers: 2704 kB' 'Cached: 10275112 kB' 'SwapCached: 0 kB' 'Active: 7294988 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899884 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527068 kB' 'Mapped: 198180 kB' 'Shmem: 6376116 kB' 'KReclaimable: 188264 kB' 'Slab: 556796 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368532 kB' 'KernelStack: 13072 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8012944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the @31/@32 compare-and-continue cycle repeats for every meminfo key ahead of HugePages_Total (MemTotal through Unaccepted); none match]
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21369428 kB' 'MemUsed: 11507512 kB' 'SwapCached: 0 kB' 'Active: 4995344 kB' 'Inactive: 3265492 kB' 'Active(anon): 4806260 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981640 kB' 'Mapped: 62392 kB' 'AnonPages: 282416 kB' 'Shmem: 4527064 kB' 'KernelStack: 7512 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 310300 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 197064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:48.147 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the @31/@32 compare-and-continue cycle runs over the node0 meminfo keys (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted) without yet matching HugePages_Surp; the scan continues below]
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:48.148 node0=1024 expecting 1024 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:48.148 00:02:48.148 real 0m2.358s 00:02:48.148 user 0m0.649s 00:02:48.148 sys 0m0.829s 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:48.148 18:57:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:48.148 ************************************ 00:02:48.148 END TEST default_setup 00:02:48.148 ************************************ 00:02:48.148 18:57:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:48.148 18:57:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:48.148 18:57:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.148 18:57:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.148 18:57:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.148 ************************************ 00:02:48.148 START TEST per_node_1G_alloc 00:02:48.148 ************************************ 00:02:48.148 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:02:48.148 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:48.148 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:48.148 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:48.149 18:57:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.149 18:57:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:49.528 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.528 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:49.528 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.528 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.528 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.528 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.528 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.528 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.528 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.528 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:49.528 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:49.528 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:49.528 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:49.528 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:49.528 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:49.528 
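The trace above shows how the per_node_1G_alloc test sizes its request: get_test_nr_hugepages is called with 1048576 (kB) plus node IDs 0 and 1, and with the default hugepage size of 2048 kB (the Hugepagesize value in the /proc/meminfo dumps that follow) that works out to nr_hugepages=512, which get_test_nr_hugepages_per_node then assigns to each listed node before setup.sh is driven with NRHUGE=512 and HUGENODE=0,1. A minimal sketch of that sizing step, with illustrative variable names rather than the exact setup/hugepages.sh implementation:

# Hedged sketch of the per-node sizing seen in the trace above; names are
# illustrative, not a verbatim copy of setup/hugepages.sh.
default_hugepages_kb=2048        # Hugepagesize reported in /proc/meminfo
size_kb=1048576                  # 1 GiB request passed to get_test_nr_hugepages
node_ids=(0 1)                   # nodes named on the command line

nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1048576 / 2048 = 512

nodes_test=()
for node in "${node_ids[@]}"; do
    nodes_test[node]=$nr_hugepages                   # 512 pages requested per node
done

# The test then lets the setup script perform the actual allocation, roughly:
#   NRHUGE=512 HUGENODE=0,1 .../spdk/scripts/setup.sh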
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:49.528 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45787596 kB' 'MemAvailable: 49289588 kB' 'Buffers: 2704 kB' 'Cached: 10275176 kB' 'SwapCached: 0 kB' 'Active: 7295524 kB' 'Inactive: 3506596 kB' 'Active(anon): 6900420 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527408 kB' 'Mapped: 198496 kB' 'Shmem: 6376180 kB' 'KReclaimable: 188264 kB' 'Slab: 556684 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368420 kB' 'KernelStack: 13120 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8012988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196400 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.528 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.529 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
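The long runs of [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] checks above are setup/common.sh's get_meminfo scanning a snapshot of /proc/meminfo field by field until it reaches the requested key (here AnonHugePages, which is 0, so the harness records anon=0). A rough reconstruction of that parsing loop, inferred from the xtrace and not a verbatim copy of the real helper in spdk/test/setup/common.sh:

# Hedged reconstruction of get_meminfo as it appears in the xtrace.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo
    # When a node is given and a per-node meminfo exists, read that instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # skip every other field, as the trace shows
        echo "$val"                       # value in kB, or a bare page count
        return 0
    done
    return 1
}

Called as anon=$(get_meminfo AnonHugePages), or with a node argument for a per-node file, which is the same pattern the verify step repeats below for HugePages_Surp and HugePages_Rsvd.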
get_meminfo HugePages_Surp 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45788468 kB' 'MemAvailable: 49290460 kB' 'Buffers: 2704 kB' 'Cached: 10275176 kB' 'SwapCached: 0 kB' 'Active: 7295548 kB' 'Inactive: 3506596 kB' 'Active(anon): 6900444 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527456 kB' 'Mapped: 198204 kB' 'Shmem: 6376180 kB' 'KReclaimable: 188264 kB' 'Slab: 556732 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368468 kB' 'KernelStack: 13136 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196384 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.530 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.531 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- 
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.532 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45788472 kB' 'MemAvailable: 49290464 kB' 'Buffers: 2704 kB' 'Cached: 10275196 kB' 'SwapCached: 0 kB' 'Active: 7295440 kB' 'Inactive: 3506596 kB' 'Active(anon): 6900336 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527352 kB' 'Mapped: 198120 kB' 'Shmem: 6376200 kB' 'KReclaimable: 188264 kB' 'Slab: 556708 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368444 kB' 'KernelStack: 13152 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196352 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-32: the field-by-field scan of the snapshot above repeats the same IFS=': ' / read -r var val _ / pattern-test / continue trace until HugePages_Rsvd matches ...]
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:49.534 nr_hugepages=1024
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:49.534 resv_hugepages=0
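At this point the script has recorded surp=0 and resv=0 from the two lookups above; the echoes and arithmetic checks traced just below verify that the kernel's HugePages_Total accounts for the requested pages plus any surplus and reserved ones. A stand-alone sketch of that consistency check, mirroring the traced expression (( 1024 == nr_hugepages + surp + resv )); the awk extraction is an assumption of this sketch, not the script's own helper:

  # Sketch of the hugepage accounting check traced around hugepages.sh@107-110.
  nr_hugepages=1024                                            # pages requested by the test
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run

  # The reported total must match requested + surplus + reserved pages.
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $total pages"
  else
      echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
  fi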
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:49.534 surplus_hugepages=0
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:49.534 anon_hugepages=0
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.534 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45788472 kB' 'MemAvailable: 49290464 kB' 'Buffers: 2704 kB' 'Cached: 10275220 kB' 'SwapCached: 0 kB' 'Active: 7295504 kB' 'Inactive: 3506596 kB' 'Active(anon): 6900400 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527360 kB' 'Mapped: 198120 kB' 'Shmem: 6376224 kB' 'KReclaimable: 188264 kB' 'Slab: 556704 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368440 kB' 'KernelStack: 13152 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196336 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[... setup/common.sh@31-32: the field-by-field scan of the snapshot above repeats the same IFS=': ' / read -r var val _ / pattern-test / continue trace until HugePages_Total matches ...]
00:02:49.535 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:49.535 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:49.535 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.535 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
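Here get_nodes starts walking /sys/devices/system/node/node* and records an expected share of 512 of the 1024 pages for each NUMA node; the trace below continues with the second node (no_nodes=2) and then re-reads each node's own meminfo under /sys to verify the split. A self-contained sketch of that per-node verification; the even 512/512 split and the sysfs paths are taken from the trace, while the loop body, the awk extraction, and the surplus subtraction are illustrative assumptions, not the literal hugepages.sh code:

  # Per-node hugepage verification, modeled on the get_nodes / per-node
  # get_meminfo calls in the trace. Expected share: 512 pages per node.
  declare -A nodes_expected
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      nodes_expected[${node_dir##*node}]=512
  done

  for n in "${!nodes_expected[@]}"; do
      meminfo=/sys/devices/system/node/node$n/meminfo
      total=$(awk '/HugePages_Total:/ {print $NF}' "$meminfo")
      surp=$(awk '/HugePages_Surp:/  {print $NF}' "$meminfo")
      # Surplus pages are not part of the pre-allocated pool, so subtract them.
      if (( total - surp == nodes_expected[n] )); then
          echo "node$n: ${nodes_expected[n]} hugepages present"
      else
          echo "node$n: expected ${nodes_expected[n]}, found $((total - surp))" >&2
      fi
  done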
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22422900 kB' 'MemUsed: 10454040 kB' 'SwapCached: 0 kB' 'Active: 4995952 kB' 'Inactive: 3265492 kB' 'Active(anon): 4806868 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981672 kB' 'Mapped: 62332 kB' 'AnonPages: 282860 kB' 'Shmem: 4527096 kB' 'KernelStack: 7592 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 310240 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 197004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.536 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23373384 kB' 'MemUsed: 4291368 kB' 'SwapCached: 0 kB' 'Active: 2299568 kB' 'Inactive: 241104 kB' 'Active(anon): 2093548 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2296296 kB' 'Mapped: 135788 kB' 'AnonPages: 244512 kB' 'Shmem: 1849172 kB' 'KernelStack: 5560 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75028 kB' 'Slab: 246464 kB' 'SReclaimable: 75028 kB' 'SUnreclaim: 171436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
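The entries above show setup/common.sh's get_meminfo helper being pointed at /sys/devices/system/node/node1/meminfo, stripping the leading "Node 1 " from each line, and then scanning field by field (the scan continues below) until it reaches HugePages_Surp. A minimal bash sketch of that lookup pattern, simplified and assumed from the trace rather than copied from the script:

```bash
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo lookup pattern traced above (assumed,
# simplified; not the setup/common.sh source verbatim).
shopt -s extglob

get_meminfo() {
    local get=$1        # field to report, e.g. HugePages_Surp
    local node=${2:-}   # optional NUMA node
    local var val _
    local mem_f=/proc/meminfo

    # Per-node meminfo lives under /sys and prefixes every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then   # skip every other field, as in the trace
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

# The lookup the trace performs: surplus huge pages on node 1 (0 in this run).
get_meminfo HugePages_Surp 1
```

Keeping the per-node file behind an -e test is what lets the same helper fall back to the whole-system /proc/meminfo when no node argument is given, which is the branching visible at common.sh@22-24 in the trace.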
00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.537 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
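Incidentally, the \H\u\g\e\P\a\g\e\s\_\S\u\r\p spelling repeated throughout these entries is how bash's xtrace renders a quoted right-hand side of ==: the pattern is matched literally, so set -x escapes every character when it echoes the test. A tiny reproduction with hypothetical values:

```bash
#!/usr/bin/env bash
set -x
var=MemFree; get=HugePages_Surp
[[ $var == "$get" ]]   # traced as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x
```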
00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.538 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:49.538 node0=512 expecting 512 00:02:49.539 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.539 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.539 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.539 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:49.539 node1=512 expecting 512 00:02:49.539 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:49.539 00:02:49.539 real 0m1.434s 00:02:49.539 user 0m0.581s 00:02:49.539 sys 0m0.816s 00:02:49.539 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:49.539 18:57:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:49.539 ************************************ 00:02:49.539 END TEST per_node_1G_alloc 00:02:49.539 ************************************ 00:02:49.819 18:57:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:49.819 18:57:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:49.819 18:57:29 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:49.819 18:57:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.819 18:57:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:49.819 ************************************ 00:02:49.820 START TEST even_2G_alloc 00:02:49.820 ************************************ 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.820 18:57:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.753 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.753 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
00:02:50.753 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.753 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.753 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.753 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.753 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.753 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:50.754 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:50.754 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.754 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.754 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.754 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.754 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.754 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.754 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:50.754 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45789064 kB' 'MemAvailable: 49291056 kB' 'Buffers: 2704 kB' 'Cached: 10275316 kB' 'SwapCached: 0 kB' 'Active: 7295068 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899964 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526824 kB' 'Mapped: 198292 kB' 'Shmem: 6376320 kB' 'KReclaimable: 188264 kB' 'Slab: 556484 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368220 kB' 'KernelStack: 13056 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196448 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.018 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 
18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
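The even_2G_alloc flow traced here mirrors what the previous test printed as "node0=512 expecting 512": the 2097152 kB request divided by the 2048 kB Hugepagesize reported in the dump above gives 1024 pages, which are split evenly across the two nodes and later compared against each node's HugePages_Total. A compact sketch of that bookkeeping, with loop and variable names chosen here for illustration rather than taken from hugepages.sh:

```bash
#!/usr/bin/env bash
# Illustrative sketch of the even per-node split these tests check.
# The constants come from the trace above; the helper structure is assumed.
size_kb=2097152            # 2 GiB test allocation
hugepagesz_kb=2048         # Hugepagesize reported in the meminfo dump
no_nodes=2                 # NUMA nodes on this test rig

nr_hugepages=$(( size_kb / hugepagesz_kb ))     # 1024 pages total

declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 per node
done

for node in "${!nodes_test[@]}"; do
    f=/sys/devices/system/node/node$node/meminfo
    actual=0
    [[ -e $f ]] && actual=$(awk '/HugePages_Total:/ {print $NF}' "$f")
    # Mirrors the "node0=512 expecting 512" lines echoed by hugepages.sh@128.
    echo "node$node=$actual expecting ${nodes_test[node]}"
done
```

Any surplus pages (the HugePages_Surp lookups that dominate this trace) are added to the per-node expectation before the comparison, which is what the hugepages.sh@116/@117 increments above are doing even when the fetched value is 0.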
00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.019 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45789332 kB' 'MemAvailable: 49291324 kB' 'Buffers: 2704 kB' 'Cached: 10275320 kB' 'SwapCached: 0 kB' 'Active: 7294704 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899600 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526456 kB' 'Mapped: 198272 kB' 'Shmem: 6376324 kB' 'KReclaimable: 188264 kB' 'Slab: 556476 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368212 kB' 'KernelStack: 13024 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196448 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.020 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
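The key-by-key scan traced above is what setup/common.sh's get_meminfo boils down to: pull one field out of /proc/meminfo (or a per-node meminfo file) and print its value. A compact stand-alone equivalent is sketched below for illustration only; meminfo_field is a hypothetical helper name, not something defined in the SPDK scripts.

    #!/usr/bin/env bash
    # Illustrative sketch, not the code under test.
    # Print one field from /proc/meminfo, or from a node's meminfo when a node id is given.
    meminfo_field() {
        local key=$1 node=${2:-}
        local src=/proc/meminfo
        [[ -n $node ]] && src=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix each line with "Node N "; strip that before matching the key.
        awk -v k="$key" '{ sub(/^Node [0-9]+ /, "") } $1 == k":" { print $2; exit }' "$src"
    }
    meminfo_field HugePages_Surp    # prints 0 for the snapshot logged above
    meminfo_field HugePages_Total   # prints 1024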
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:51.021 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:51.022 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45789540 kB' 'MemAvailable: 49291532 kB' 'Buffers: 2704 kB' 'Cached: 10275336 kB' 'SwapCached: 0 kB' 'Active: 7294836 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899732 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526568 kB' 'Mapped: 198196 kB' 'Shmem: 6376340 kB' 'KReclaimable: 188264 kB' 'Slab: 556484 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368220 kB' 'KernelStack: 13088 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196448 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[ setup/common.sh@32 xtrace: the same key-by-key scan repeats, this time against HugePages_Rsvd ]
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:51.023 nr_hugepages=1024
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:51.023 resv_hugepages=0
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:51.023 surplus_hugepages=0
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:51.023 anon_hugepages=0
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
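The four values echoed above are the bookkeeping the test settles on before continuing: a 1024-page pool of 2048 kB hugepages (the 2097152 kB of Hugetlb in the snapshots) with nothing reserved, surplus, or backed by anonymous hugepages. A stand-alone sketch of that same check follows, for illustration only; the helper name check_hugepage_pool and the hard-coded 1024-page target are assumptions for this example, not code taken from setup/hugepages.sh.

    #!/usr/bin/env bash
    # Illustrative sketch, not the SPDK test code.
    check_hugepage_pool() {
        local total free rsvd surp
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
        rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        # Expect exactly the requested pool, all of it free, none reserved or surplus.
        (( total == 1024 && free == 1024 && rsvd == 0 && surp == 0 ))
    }
    check_hugepage_pool && echo "hugepage pool matches the expected 1024 x 2048 kB"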
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.023 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.024 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.024 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.024 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.024 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:51.024 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:51.024 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45789540 kB' 'MemAvailable: 49291532 kB' 'Buffers: 2704 kB' 'Cached: 10275360 kB' 'SwapCached: 0 kB' 'Active: 7294884 kB' 'Inactive: 3506596 kB' 'Active(anon): 6899780 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526564 kB' 'Mapped: 198196 kB' 'Shmem: 6376364 kB' 'KReclaimable: 188264 kB' 'Slab: 556484 kB' 'SReclaimable: 188264 kB' 'SUnreclaim: 368220 kB' 'KernelStack: 13088 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8013320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196448 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[ setup/common.sh@32 xtrace: the key-by-key scan repeats once more, this time against HugePages_Total ]
18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22426216 kB' 'MemUsed: 10450724 kB' 'SwapCached: 0 kB' 'Active: 4995848 kB' 'Inactive: 3265492 kB' 'Active(anon): 4806764 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981680 kB' 'Mapped: 62392 kB' 'AnonPages: 282772 kB' 'Shmem: 4527104 kB' 'KernelStack: 7544 kB' 'PageTables: 
4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113236 kB' 'Slab: 310088 kB' 'SReclaimable: 113236 kB' 'SUnreclaim: 196852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.025 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.026 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23363072 kB' 'MemUsed: 4301680 kB' 'SwapCached: 0 kB' 'Active: 2298880 kB' 'Inactive: 241104 kB' 'Active(anon): 2092860 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2296424 kB' 'Mapped: 135804 kB' 'AnonPages: 243588 kB' 'Shmem: 1849300 kB' 'KernelStack: 5528 kB' 'PageTables: 
3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75028 kB' 'Slab: 246396 kB' 'SReclaimable: 75028 kB' 'SUnreclaim: 171368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.027 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.028 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:51.287 node0=512 expecting 512 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:51.287 node1=512 expecting 512 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:51.287 00:02:51.287 real 0m1.453s 00:02:51.287 user 0m0.605s 00:02:51.287 sys 0m0.809s 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:51.287 18:57:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:51.287 ************************************ 00:02:51.287 END TEST even_2G_alloc 00:02:51.287 ************************************ 00:02:51.287 18:57:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:51.287 18:57:31 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:51.287 18:57:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.287 18:57:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.287 18:57:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.287 
************************************ 00:02:51.287 START TEST odd_alloc 00:02:51.287 ************************************ 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.287 18:57:31 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.220 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.221 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.221 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.221 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:52.221 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.221 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.221 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:02:52.221 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.221 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.221 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.221 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.221 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:52.221 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.221 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.221 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:52.221 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.221 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.484 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45813592 kB' 'MemAvailable: 49315568 kB' 'Buffers: 2704 kB' 'Cached: 10275448 kB' 'SwapCached: 0 kB' 'Active: 7291492 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896388 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523224 kB' 'Mapped: 197328 kB' 'Shmem: 6376452 kB' 'KReclaimable: 188232 kB' 'Slab: 556264 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 368032 kB' 'KernelStack: 13072 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 
'Committed_AS: 8000568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196336 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.485 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the get_meminfo AnonHugePages scan walks the remaining /proc/meminfo keys, Active(anon) through HardwareCorrupted, taking the setup/common.sh@32 'continue' branch on every non-matching key]
00:02:52.486 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:52.486 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.486 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:52.486 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:52.486 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: setup/common.sh@17-31 set get=HugePages_Surp, node=, mem_f=/proc/meminfo, cache the file with mapfile -t mem, strip any 'Node N' prefix, and start the IFS=': ' read loop]
00:02:52.486 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45813044 kB' 'MemAvailable: 49315020 kB' 'Buffers: 2704 kB' 'Cached: 10275452 kB' 'SwapCached: 0 kB' 'Active: 7292252 kB' 'Inactive: 3506596 kB' 'Active(anon): 6897148 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524008 kB' 'Mapped: 197328 kB' 'Shmem: 6376456 kB' 'KReclaimable: 188232 kB' 'Slab: 556272 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 368040 kB' 'KernelStack: 13200 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7999224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196464 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the per-key scan now repeats for HugePages_Surp and continues on the lines below]
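For reference, the per-key lookup that the trace above records for AnonHugePages (and repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total) boils down to the stand-alone bash sketch that follows. It is a simplified illustration rather than the exact setup/common.sh code, and the function name here is made up:

#!/usr/bin/env bash
# Sketch: print the value of a single /proc/meminfo key.
# Mirrors the IFS=': ' / read / match-or-continue loop seen in the xtrace;
# the real helper additionally caches the file with mapfile and strips the
# "Node N" prefix so it can read per-node meminfo files as well.
get_meminfo_sketch() {
    local get="$1" var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # matching key: print its value
        continue                                              # any other key: keep scanning
    done < /proc/meminfo
    return 1
}
# usage: get_meminfo_sketch HugePages_Total   -> prints 1025 on the node traced here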
[xtrace condensed: the get_meminfo HugePages_Surp scan walks every /proc/meminfo key, MemTotal through HugePages_Rsvd, taking the setup/common.sh@32 'continue' branch on each, until HugePages_Surp matches]
00:02:52.488 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.488 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.488 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:52.488 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:52.488 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: setup/common.sh@17-31 set get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, cache the file with mapfile -t mem, strip any 'Node N' prefix, and start the IFS=': ' read loop]
00:02:52.488 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45811740 kB' 'MemAvailable: 49313716 kB' 'Buffers: 2704 kB' 'Cached: 10275472 kB' 'SwapCached: 0 kB' 'Active: 7293108 kB' 'Inactive: 3506596 kB' 'Active(anon): 6898004 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524828 kB' 'Mapped: 197264 kB' 'Shmem: 6376476 kB' 'KReclaimable: 188232 kB' 'Slab: 556304 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 368072 kB' 'KernelStack: 13072 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 8000608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196416 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the per-key scan now repeats for HugePages_Rsvd and continues on the lines below]
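As a quick sanity check on the snapshots just captured: the huge page pool they report is internally consistent, since 1025 pages of 2048 kB each account exactly for the Hugetlb total. An illustrative one-liner (variable names made up, values copied from the snapshot above):

# cross-check HugePages_Total x Hugepagesize against Hugetlb
hp_total=1025; hp_size_kb=2048
echo $(( hp_total * hp_size_kb ))   # 2099200, matching 'Hugetlb: 2099200 kB'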
[xtrace condensed: the get_meminfo HugePages_Rsvd scan walks every /proc/meminfo key, MemTotal through HugePages_Free, taking the setup/common.sh@32 'continue' branch on each, until HugePages_Rsvd matches]
00:02:52.489 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:52.489 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.489 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:52.489 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:52.489 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:02:52.489 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: setup/common.sh@17-31 set get=HugePages_Total, node=, mem_f=/proc/meminfo, cache the file with mapfile -t mem, strip any 'Node N' prefix, and start the IFS=': ' read loop]
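Taken together, the checks recorded just above say: after the test requested an odd-sized pool of 1025 huge pages, HugePages_Total must equal that request and nothing may be left surplus or reserved. A stand-alone sketch of that bookkeeping, reusing the illustrative helper from earlier (a simplified reading of the trace, not the test's exact code):

# Sketch of the odd_alloc consistency check (names illustrative).
verify_odd_alloc_sketch() {
    local want="$1" total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    (( want == total + surp + resv )) && (( want == total ))
}
# verify_odd_alloc_sketch 1025 && echo "odd-sized pool allocated as requested"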
00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45808972 kB' 'MemAvailable: 49310948 kB' 'Buffers: 2704 kB' 'Cached: 10275472 kB' 'SwapCached: 0 kB' 'Active: 7292832 kB' 'Inactive: 3506596 kB' 'Active(anon): 6897728 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524556 kB' 'Mapped: 197264 kB' 'Shmem: 6376476 kB' 'KReclaimable: 188232 kB' 'Slab: 556288 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 368056 kB' 'KernelStack: 13376 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609848 kB' 'Committed_AS: 7999264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196576 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
[xtrace condensed: the per-key scan for HugePages_Total is under way, MemTotal through Mapped so far, and continues on the lines below]
00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.490 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22431220 kB' 'MemUsed: 10445720 kB' 'SwapCached: 0 kB' 'Active: 4993992 kB' 'Inactive: 3265492 kB' 'Active(anon): 4804908 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981708 kB' 'Mapped: 61656 kB' 'AnonPages: 281024 kB' 'Shmem: 4527132 kB' 'KernelStack: 7432 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113204 kB' 'Slab: 310024 kB' 'SReclaimable: 113204 kB' 'SUnreclaim: 196820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.491 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.492 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 23376664 kB' 'MemUsed: 4288088 kB' 'SwapCached: 0 kB' 'Active: 2297000 kB' 'Inactive: 241104 kB' 'Active(anon): 2090980 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2296468 kB' 'Mapped: 135608 kB' 'AnonPages: 241700 kB' 'Shmem: 1849344 kB' 'KernelStack: 5560 kB' 'PageTables: 3580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75028 kB' 'Slab: 246264 kB' 'SReclaimable: 75028 kB' 'SUnreclaim: 171236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.493 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:52.494 node0=512 expecting 513 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:52.494 node1=513 expecting 512 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:52.494 00:02:52.494 real 0m1.387s 00:02:52.494 user 0m0.582s 00:02:52.494 sys 0m0.766s 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.494 18:57:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:52.494 ************************************ 00:02:52.494 END TEST odd_alloc 00:02:52.494 ************************************ 00:02:52.494 18:57:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:52.494 18:57:32 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:52.494 18:57:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.494 18:57:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.494 18:57:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:52.753 ************************************ 00:02:52.753 START TEST custom_alloc 00:02:52.753 ************************************ 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.753 18:57:32 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
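At this point the trace has sized both pools for the custom_alloc test: the first 1 GiB request became 512 two-megabyte pages, split 256 per node before being recorded as nodes_hp[0]=512, and the second 2 GiB request became 1024 pages recorded as nodes_hp[1]=1024, giving the 1536-page total that verify_nr_hugepages checks further down. The sketch below reproduces that arithmetic outside the harness; it is a minimal illustration only, not SPDK's setup.sh or hugepages.sh, and the helper name split_hugepages plus the remainder-to-last-node policy are assumptions made for the example.

#!/usr/bin/env bash
# Minimal sketch: convert a requested size in kB into hugepages, spread them
# evenly over the NUMA nodes visible in sysfs, and print a HUGENODE-style
# string such as "nodes_hp[0]=256,nodes_hp[1]=256".
shopt -s extglob nullglob

hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

split_hugepages() {
    local size_kb=$1
    local nr_pages=$(( size_kb / hugepagesize_kb ))
    local -a nodes=(/sys/devices/system/node/node+([0-9]))
    local count=${#nodes[@]} idx
    (( count > 0 )) || return 1
    local -a per_node=()
    for (( idx = 0; idx < count; idx++ )); do
        per_node[idx]=$(( nr_pages / count ))
    done
    # Park any remainder on the last node, mirroring the node0=512 / node1=513
    # split the odd_alloc trace above shows for its 1025-page pool.
    per_node[count - 1]=$(( per_node[count - 1] + nr_pages % count ))
    local out=""
    for idx in "${!per_node[@]}"; do
        out+="${out:+,}nodes_hp[$idx]=${per_node[idx]}"
    done
    printf '%s\n' "$out"
}

# 1 GiB of 2048 kB pages on a two-node box -> nodes_hp[0]=256,nodes_hp[1]=256
split_hugepages $(( 1024 * 1024 ))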
00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.753 18:57:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.695 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.695 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.695 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.695 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.695 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.695 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.695 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.695 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.695 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.695 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.695 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.695 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:02:53.695 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.695 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.695 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.695 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.695 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.695 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:53.695 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:53.695 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.695 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.695 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44764952 kB' 'MemAvailable: 48266928 kB' 'Buffers: 2704 kB' 'Cached: 10275584 kB' 'SwapCached: 0 kB' 'Active: 7291460 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896356 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523004 kB' 'Mapped: 197396 kB' 'Shmem: 6376588 kB' 'KReclaimable: 188232 kB' 'Slab: 556236 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 368004 kB' 'KernelStack: 13008 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7998468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196320 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.696 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.961 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.962 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.962 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44764784 kB' 'MemAvailable: 48266760 kB' 'Buffers: 2704 kB' 'Cached: 10275584 kB' 'SwapCached: 0 kB' 'Active: 7291608 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896504 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523108 kB' 'Mapped: 197360 kB' 'Shmem: 6376588 kB' 'KReclaimable: 188232 kB' 'Slab: 556212 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 367980 kB' 'KernelStack: 12960 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7998488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196288 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.963 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.964 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
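[editor's note] The repeated key-by-key scan traced above (and continuing below) is get_meminfo matching one /proc/meminfo field at a time: it reads the file with mapfile, strips any "Node N " prefix, splits each line on ': ', and echoes the value once the requested key matches. A minimal sketch of that helper, reconstructed from the traced commands (the sysfs probe path and the mapfile/IFS handling follow the trace; the rest is an assumption, not the exact setup/common.sh source):

    # Sketch: return one field from /proc/meminfo (or a node's meminfo), as traced above.
    get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookup if the sysfs file exists (the trace probes node/node$node/meminfo).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node files
      local line var val _
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
          echo "${val:-0}"
          return 0
        fi
      done
      echo 0
    }

    get_meminfo HugePages_Total   # e.g. prints 1536 on the test node above

verify_nr_hugepages uses this helper three times in the trace: AnonHugePages (anon=0, since transparent hugepages are set to madvise), HugePages_Surp (surp=0), and HugePages_Rsvd, before comparing the per-node totals against the requested 512/1024 split.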
00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44765232 kB' 'MemAvailable: 48267208 kB' 'Buffers: 2704 kB' 'Cached: 10275600 kB' 'SwapCached: 0 kB' 'Active: 7291156 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896052 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522644 kB' 'Mapped: 197284 kB' 'Shmem: 6376604 kB' 'KReclaimable: 188232 kB' 'Slab: 556220 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 367988 kB' 'KernelStack: 12944 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7998508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.965 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.966 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.967 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:53.968 nr_hugepages=1536 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.968 resv_hugepages=0 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.968 surplus_hugepages=0 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.968 anon_hugepages=0 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 44765900 kB' 'MemAvailable: 48267876 kB' 'Buffers: 2704 kB' 'Cached: 10275604 kB' 'SwapCached: 0 kB' 'Active: 7291320 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896216 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522804 kB' 'Mapped: 197284 kB' 'Shmem: 6376608 kB' 'KReclaimable: 188232 kB' 'Slab: 556220 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 367988 kB' 'KernelStack: 12944 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086584 kB' 'Committed_AS: 7998528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.968 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.970 
18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.970 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22425368 kB' 'MemUsed: 10451572 kB' 'SwapCached: 0 kB' 'Active: 4993648 kB' 'Inactive: 3265492 kB' 'Active(anon): 4804564 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981812 kB' 'Mapped: 61664 kB' 'AnonPages: 280456 kB' 'Shmem: 4527236 kB' 'KernelStack: 7416 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113204 kB' 'Slab: 310072 kB' 'SReclaimable: 113204 kB' 'SUnreclaim: 196868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.972 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664752 kB' 'MemFree: 22339620 kB' 'MemUsed: 5325132 kB' 'SwapCached: 0 kB' 'Active: 2299040 kB' 'Inactive: 241104 kB' 'Active(anon): 2093020 kB' 'Inactive(anon): 0 kB' 'Active(file): 206020 kB' 'Inactive(file): 241104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2296560 kB' 'Mapped: 135620 kB' 'AnonPages: 243740 kB' 'Shmem: 1849436 kB' 'KernelStack: 5560 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 75028 kB' 'Slab: 246148 kB' 'SReclaimable: 75028 kB' 'SUnreclaim: 171120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
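The repeated "[[ <key> == ... ]] / continue" entries above all come from one small helper: get_meminfo <key> [node] reads /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node index is given, strips the leading "Node N " prefix that the per-node files carry, and walks the "key: value" pairs until the requested key matches, echoing its value. Below is a minimal sketch of that helper reconstructed from the trace; the real setup/common.sh may differ in details, so treat names and option handling here as illustrative.

shopt -s extglob   # the 'Node +([0-9]) ' prefix strip below uses extended globs

# Sketch of the get_meminfo helper seen in the trace.
# Usage: get_meminfo HugePages_Surp 1   -> value for NUMA node 1
#        get_meminfo HugePages_Total    -> system-wide value from /proc/meminfo
get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo mem

    # Use the per-node meminfo file when a node index is supplied and present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # source of the 'continue' entries in the log
        echo "${val:-0}"
        return 0
    done
    return 1
}
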
00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.975 18:57:34 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:53.975 node0=512 expecting 512 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:53.975 node1=1024 expecting 1024 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:53.975 00:02:53.975 real 0m1.344s 00:02:53.975 user 0m0.546s 00:02:53.975 sys 0m0.759s 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:53.975 18:57:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:53.975 ************************************ 00:02:53.975 END TEST custom_alloc 00:02:53.975 ************************************ 00:02:53.975 18:57:34 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:53.975 18:57:34 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:53.975 18:57:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.975 18:57:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.975 18:57:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.975 ************************************ 00:02:53.975 START TEST no_shrink_alloc 00:02:53.975 ************************************ 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.975 18:57:34 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.975 18:57:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:54.909 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:54.909 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:54.909 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:54.909 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:54.909 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:54.909 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:54.909 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:54.909 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:54.909 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:54.909 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:54.909 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:54.909 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:54.909 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:54.909 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:54.909 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:54.909 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:54.909 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45778356 kB' 'MemAvailable: 49280332 kB' 'Buffers: 2704 kB' 'Cached: 10275708 kB' 'SwapCached: 0 kB' 'Active: 7297092 kB' 'Inactive: 3506596 kB' 'Active(anon): 6901988 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528472 kB' 'Mapped: 197816 kB' 'Shmem: 6376712 kB' 'KReclaimable: 188232 kB' 'Slab: 556172 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 367940 kB' 'KernelStack: 12960 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 8004912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196260 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.177 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 
18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.178 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45778400 kB' 'MemAvailable: 49280376 kB' 'Buffers: 2704 kB' 'Cached: 10275708 kB' 'SwapCached: 0 kB' 'Active: 7296636 kB' 'Inactive: 3506596 kB' 'Active(anon): 6901532 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528412 kB' 'Mapped: 197804 kB' 'Shmem: 6376712 kB' 'KReclaimable: 188232 kB' 'Slab: 556192 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 367960 kB' 'KernelStack: 12928 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 
kB' 'Committed_AS: 8004928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.179 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.180 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45783364 kB' 'MemAvailable: 49285340 kB' 'Buffers: 2704 kB' 'Cached: 10275728 kB' 'SwapCached: 0 kB' 'Active: 7291528 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896424 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522956 kB' 'Mapped: 197304 kB' 'Shmem: 6376732 kB' 'KReclaimable: 188232 kB' 'Slab: 556296 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 368064 kB' 'KernelStack: 12960 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7998832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.181 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
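The trace running through this stretch is setup/common.sh's get_meminfo walking every field of /proc/meminfo and discarding all but the key it was asked for (HugePages_Surp above, HugePages_Rsvd here, HugePages_Total further down). The following is a minimal stand-alone sketch of that lookup, written by the editor for orientation only; the function name, the sed-based "Node N" stripping, and the argument handling are simplifications, not the exact helper the test uses.

get_meminfo_sketch() {
    local get=$1 node=${2:-}                 # meminfo key, optional NUMA node
    local mem_f=/proc/meminfo var val _
    # Per-node queries read the node-local meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one shows up, then print it.
        [[ $var == "$get" ]] && { echo "${val%% *}"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

With that sketch, get_meminfo_sketch HugePages_Rsvd would print 0 on this host, matching the resv=0 the trace arrives at a few entries below.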
00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.182 nr_hugepages=1024 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.182 resv_hugepages=0 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.182 surplus_hugepages=0 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.182 anon_hugepages=0 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.182 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45783364 kB' 'MemAvailable: 49285340 kB' 'Buffers: 2704 kB' 'Cached: 10275748 kB' 'SwapCached: 0 kB' 'Active: 7291600 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896496 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522952 kB' 'Mapped: 197304 kB' 'Shmem: 6376752 kB' 'KReclaimable: 188232 kB' 'Slab: 556296 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 368064 kB' 'KernelStack: 12960 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7998852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.183 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
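With surp and resv in hand, hugepages.sh checks that the kernel's HugePages_Total (being read in the trace around this point) equals nr_hugepages plus surplus plus reserved pages before the no_shrink_alloc case continues. Roughly, and assuming the get_meminfo_sketch helper from the earlier note rather than the test's own functions:

nr_hugepages=1024
surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2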
00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.184 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.444 18:57:35 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21372296 kB' 'MemUsed: 11504644 kB' 'SwapCached: 0 kB' 'Active: 4993536 kB' 'Inactive: 3265492 kB' 'Active(anon): 4804452 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981828 kB' 'Mapped: 61664 kB' 'AnonPages: 280364 kB' 'Shmem: 4527252 kB' 'KernelStack: 7400 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113204 kB' 'Slab: 310188 kB' 'SReclaimable: 113204 kB' 'SUnreclaim: 196984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
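The get_nodes loop traced just above walks /sys/devices/system/node/node*/ and records how many hugepages each node exposes (1024 on node0, 0 on node1, no_nodes=2), then re-reads node0's meminfo for HugePages_Surp. A hedged stand-alone equivalent of that per-node tally, using each node's meminfo rather than whatever source the test script reads:

declare -A nodes_sys
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    # Each node's meminfo carries a "Node <n> HugePages_Total: <count>" line.
    nodes_sys[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
done
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]} hugepages"
done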
00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.444 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:55.444-00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (trace condensed) remaining /proc/meminfo keys (Inactive(file) through HugePages_Free) checked against HugePages_Surp, none matched
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:55.445 18:57:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:56.379 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:56.379 (output condensed) 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 0e20-0e27): Already using the vfio-pci driver
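The passes condensed above and below all repeat one small routine from setup/common.sh: read the relevant meminfo file, strip any per-node "Node N " prefix, then walk the keys with IFS=': ' until the requested one matches and echo its value. The sketch below is reconstructed only from that xtrace; the shipped get_meminfo helper in the SPDK test scripts may differ in detail, and the _sketch suffix marks it as illustrative rather than the real code.

#!/usr/bin/env bash
shopt -s extglob

# Reconstruction of the meminfo lookup the xtrace keeps repeating.
get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, prefer the per-node view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # e.g. HugePages_Surp -> 0 on this box
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# get_meminfo_sketch HugePages_Total     -> 1024
# get_meminfo_sketch HugePages_Total 0   -> node0's count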
00:02:56.645 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:56.645 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:56.645 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # (trace condensed) local node sorted_t sorted_s surp resv anon
00:02:56.645 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:56.645 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:56.645 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # (trace condensed) get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, read with IFS=': '
00:02:56.645 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45789424 kB' 'MemAvailable: 49291400 kB' 'Buffers: 2704 kB' 'Cached: 10275816 kB' 'SwapCached: 0 kB' 'Active: 7291856 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896752 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523240 kB' 'Mapped: 197400 kB' 'Shmem: 6376820 kB' 'KReclaimable: 188232 kB' 'Slab: 556136 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 367904 kB' 'KernelStack: 12992 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7998668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196304 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB'
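The INFO line above is the visible outcome of the CLEAR_HUGE=no / NRHUGE=512 request: node0 already holds 1024 persistent 2048 kB pages, so nothing is shrunk. How scripts/setup.sh issues the request is not shown in this excerpt; the sketch below only illustrates the standard sysfs knob it wraps, with the keep-the-larger-allocation behaviour that the message implies. request_hugepages_sketch and its arguments are illustrative names, not the script's API.

#!/usr/bin/env bash

# Sketch: request N persistent 2 MB hugepages on one node without shrinking
# an existing, larger allocation (assumed behaviour, inferred from the log).
request_hugepages_sketch() {
    local node=${1:-0} want=${2:-512}
    local sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    local have
    have=$(<"$sysfs")
    if ((have >= want)); then
        echo "INFO: Requested $want hugepages but $have already allocated on node$node"
        return 0
    fi
    echo "$want" >"$sysfs"   # needs root; the kernel may grant fewer if memory is fragmented
}

# request_hugepages_sketch 0 512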
00:02:56.645-00:02:56.646 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (trace condensed) /proc/meminfo keys from MemTotal through HardwareCorrupted checked against AnonHugePages, none matched
00:02:56.646 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:56.646-00:02:56.647 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.647 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:56.647 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:56.647 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:56.647 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # (trace condensed) get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, read with IFS=': '
00:02:56.647 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # (snapshot condensed) second /proc/meminfo snapshot; identical to the one above except MemFree: 45789900 kB, MemAvailable: 49291876 kB, Cached: 10275820 kB, Active: 7292084 kB, Active(anon): 6896980 kB, AnonPages: 523468 kB, Shmem: 6376824 kB, KernelStack: 12944 kB, PageTables: 7984 kB, Committed_AS: 7998684 kB, VmallocUsed: 196224 kB; hugepage counters unchanged (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0)
00:02:56.647-00:02:56.648 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (trace condensed) keys MemTotal through HugePages_Rsvd checked against HugePages_Surp, none matched
00:02:56.648 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:56.648 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:56.648 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:56.648 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
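At this point anon=0 and surp=0 have been collected and HugePages_Rsvd is fetched next, so the verify_nr_hugepages pass reduces to comparing each node's allocation against the expected count, as in the "node0=1024 expecting 1024" line earlier. A rough, self-contained reconstruction under those assumptions follows; the per-node surplus/reserved bookkeeping at hugepages.sh@117-130 is only hinted at here, and verify_nr_hugepages_sketch is an illustrative name, not the shipped function.

#!/usr/bin/env bash

# Rough reconstruction of the verification traced above: global hugepage
# counters from /proc/meminfo, per-node allocations from sysfs, and a
# "nodeN=<got> expecting <want>" report per node.
verify_nr_hugepages_sketch() {
    local want=${1:-1024} rc=0
    local anon surp resv node got
    anon=$(awk '/^AnonHugePages:/  {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "THP anon: ${anon} kB, surplus: ${surp}, reserved: ${resv}"
    for node in /sys/devices/system/node/node[0-9]*; do
        got=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node##*/node}=${got} expecting ${want}"
        [[ $got == "$want" ]] || rc=1
    done
    return $rc
}

# verify_nr_hugepages_sketch 1024   # on this box: node0=1024 expecting 1024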
00:02:56.648 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:56.648-00:02:56.649 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # (trace condensed) get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, read with IFS=': '
00:02:56.649 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # (snapshot condensed) third /proc/meminfo snapshot; differs from the previous one only in MemFree: 45790240 kB, MemAvailable: 49292216 kB, Cached: 10275840 kB, Active: 7291192 kB, Active(anon): 6896088 kB, AnonPages: 522460 kB, Mapped: 197316 kB, Shmem: 6376844 kB, KernelStack: 12928 kB, PageTables: 7912 kB, Committed_AS: 7998712 kB; hugepage counters unchanged (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0)
00:02:56.649-00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (trace condensed) keys MemTotal through PageTables checked against HugePages_Rsvd, none matched
00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.650 nr_hugepages=1024 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.650 resv_hugepages=0 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.650 surplus_hugepages=0 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.650 anon_hugepages=0 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:56.650 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
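The run of "continue" lines above is one complete pass of the get_meminfo helper from setup/common.sh: the meminfo file is slurped with mapfile, any leading "Node <n> " prefix is stripped, and every "key: value" pair is skipped until the requested key (HugePages_Rsvd in this pass) matches, at which point its value (0) is echoed back; the caller records resv=0 and re-checks that the 1024 requested hugepages equal nr_hugepages + surplus + reserved. A condensed, stand-alone sketch of that lookup, with awk standing in for the mapfile/read loop and purely illustrative names, might look like:

    # Hypothetical condensed form of the traced lookup; not the SPDK helper itself.
    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # use the per-node meminfo file when a node is given and that file exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node files prefix each line with "Node <n> ", so match the key in any field
        awk -v k="$key" '{ for (i = 1; i < NF; i++) if ($i == (k ":")) { print $(i + 1); exit } }' "$mem_f"
    }

    get_meminfo_sketch HugePages_Rsvd      # prints 0 on the system traced above
    get_meminfo_sketch HugePages_Total 0   # prints 1024 from node0's meminfo file

The same scan is repeated immediately below for HugePages_Total, which is why the trace pattern recurs almost verbatim.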
00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541692 kB' 'MemFree: 45790492 kB' 'MemAvailable: 49292468 kB' 'Buffers: 2704 kB' 'Cached: 10275868 kB' 'SwapCached: 0 kB' 'Active: 7291828 kB' 'Inactive: 3506596 kB' 'Active(anon): 6896724 kB' 'Inactive(anon): 0 kB' 'Active(file): 395104 kB' 'Inactive(file): 3506596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523112 kB' 'Mapped: 197316 kB' 'Shmem: 6376872 kB' 'KReclaimable: 188232 kB' 'Slab: 556128 kB' 'SReclaimable: 188232 kB' 'SUnreclaim: 367896 kB' 'KernelStack: 12960 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610872 kB' 'Committed_AS: 7999100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 36288 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1822300 kB' 'DirectMap2M: 13826048 kB' 'DirectMap1G: 53477376 kB' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.651 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21361328 kB' 'MemUsed: 11515612 kB' 'SwapCached: 0 kB' 'Active: 4993456 kB' 'Inactive: 3265492 kB' 'Active(anon): 4804372 kB' 'Inactive(anon): 0 kB' 'Active(file): 189084 kB' 'Inactive(file): 3265492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7981828 kB' 'Mapped: 61672 kB' 'AnonPages: 280300 kB' 'Shmem: 4527252 kB' 'KernelStack: 7432 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113204 kB' 'Slab: 310052 kB' 'SReclaimable: 113204 kB' 'SUnreclaim: 196848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.652 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.653 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:56.654 node0=1024 expecting 1024 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:56.654 00:02:56.654 real 0m2.681s 00:02:56.654 user 0m1.126s 00:02:56.654 sys 0m1.475s 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:56.654 18:57:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.654 ************************************ 00:02:56.654 END TEST no_shrink_alloc 
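The closing "node0=1024 expecting 1024" line is the verdict of the per-node accounting: get_nodes reads HugePages_Total from every /sys/devices/system/node/node*/meminfo, the reserved and surplus pages per node (both 0 here) are folded in, and the result must still match the 1024 pages the test configured. A rough stand-alone equivalent of that check, assuming a single populated node and illustrative names only:

    # Illustrative sketch of the closing per-node check; not the setup/hugepages.sh code itself.
    total=0
    for f in /sys/devices/system/node/node[0-9]*/meminfo; do
        node=${f%/meminfo}; node=${node##*node}
        # each per-node meminfo reports its own HugePages_Total
        pages=$(awk '/HugePages_Total:/ {print $NF}' "$f")
        echo "node$node=$pages"
        total=$((total + pages))
    done
    (( total == 1024 )) && echo "hugepage accounting matches" || echo "mismatch: got $total"

On this machine node0 holds all 1024 pages and node1 holds none, so the check passes and the no_shrink_alloc test finishes after roughly 2.7 seconds of wall time.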
00:02:56.654 ************************************ 00:02:56.654 18:57:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:56.654 18:57:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:56.654 00:02:56.654 real 0m11.033s 00:02:56.654 user 0m4.240s 00:02:56.654 sys 0m5.700s 00:02:56.654 18:57:37 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:56.654 18:57:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.654 ************************************ 00:02:56.654 END TEST hugepages 00:02:56.654 ************************************ 00:02:56.654 18:57:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:56.654 18:57:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:56.654 18:57:37 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:56.654 18:57:37 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.654 18:57:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:56.654 ************************************ 00:02:56.654 START TEST driver 00:02:56.654 ************************************ 00:02:56.912 18:57:37 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:56.913 * Looking for test storage... 
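The hugepages section that ends here is dominated by two shell idioms: the long runs of "[[ <field> == HugePages_Surp ]] / continue" are a per-field scan of meminfo, and the final clear_hp pass writes 0 back into every per-node hugepage pool before the next test group starts. A compact stand-alone sketch of both steps (simplified to /proc/meminfo and the default sysfs layout; the function names are illustrative, not the exact helpers from test/setup/common.sh and hugepages.sh):

    #!/usr/bin/env bash
    # Print the value of a single meminfo field, mirroring the
    # IFS=': ' read/continue loop traced above. The real SPDK helper also
    # understands the per-node meminfo files under /sys/devices/system/node.
    meminfo_field() {
        local want=$1 var val
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip every other field
            echo "$val"                         # a trailing "kB" unit, when present, lands in the discarded field
            return 0
        done < /proc/meminfo
        return 1
    }

    # Reset every hugepage pool on every NUMA node, as clear_hp does at the
    # end of the hugepages tests. Needs root.
    clear_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }

    meminfo_field HugePages_Surp   # prints 0 in the run captured above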
00:02:56.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.913 18:57:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:56.913 18:57:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.913 18:57:37 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.445 18:57:39 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:59.445 18:57:39 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.445 18:57:39 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.445 18:57:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:59.445 ************************************ 00:02:59.445 START TEST guess_driver 00:02:59.445 ************************************ 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:59.445 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:59.445 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:59.445 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:59.445 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:59.445 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:59.445 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:59.445 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:59.445 18:57:39 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:59.445 Looking for driver=vfio-pci 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.445 18:57:39 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.379 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.380 18:57:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.318 18:57:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:01.318 18:57:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.318 18:57:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.576 18:57:41 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:01.576 18:57:41 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:01.576 18:57:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.576 18:57:41 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.136 00:03:04.136 real 0m4.776s 00:03:04.136 user 0m1.049s 00:03:04.136 sys 0m1.844s 00:03:04.136 18:57:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.136 18:57:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:04.136 ************************************ 00:03:04.136 END TEST guess_driver 00:03:04.136 ************************************ 00:03:04.136 18:57:44 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:04.136 00:03:04.136 real 0m7.200s 00:03:04.136 user 0m1.599s 00:03:04.136 sys 0m2.780s 00:03:04.136 18:57:44 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.136 18:57:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:04.136 ************************************ 00:03:04.136 END TEST driver 00:03:04.136 ************************************ 00:03:04.136 18:57:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:04.136 18:57:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:04.136 18:57:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.136 18:57:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.136 18:57:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.136 ************************************ 00:03:04.136 START TEST devices 00:03:04.136 ************************************ 00:03:04.136 18:57:44 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:04.136 * Looking for test storage... 00:03:04.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.136 18:57:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:04.136 18:57:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:04.136 18:57:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.136 18:57:44 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:05.513 
18:57:45 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:05.513 No valid GPT data, bailing 00:03:05.513 18:57:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:05.513 18:57:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:05.513 18:57:45 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:05.513 18:57:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.513 18:57:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:05.513 ************************************ 00:03:05.513 START TEST nvme_mount 00:03:05.513 ************************************ 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:05.513 18:57:45 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:06.891 Creating new GPT entries in memory. 00:03:06.891 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:06.891 other utilities. 00:03:06.891 18:57:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:06.891 18:57:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:06.891 18:57:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:06.891 18:57:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:06.891 18:57:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:07.826 Creating new GPT entries in memory. 00:03:07.826 The operation has completed successfully. 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3169870 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:07.826 18:57:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.826 18:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.761 18:57:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:08.761 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:08.761 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:08.762 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:08.762 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:09.020 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:09.020 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:09.020 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:09.020 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:09.020 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:09.020 18:57:49 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:09.020 18:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.020 18:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:09.020 18:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.278 18:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.214 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.472 18:57:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
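The stretch of "[[ 0000:xx:xx.x == 0000:88:00.0 ]]" checks running here is the verify step reading the output of setup.sh config and confirming that the mounted test disk is reported as an active device (data@nvme0n1) rather than being rebound to vfio-pci. A rough sketch of that scan; the column layout of the config output is an assumption inferred from the read in the log, and the setup.sh path is the one used throughout this job:

    #!/usr/bin/env bash
    # Walk `setup.sh config` output and require that the test disk's status
    # line mentions the expected active mount. Columns are assumed to be
    # "<bdf> <vendor> <device> <status...>".
    target=0000:88:00.0
    marker=data@nvme0n1          # the mount tag verify was called with above
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue
        [[ $status == *"Active devices:"*"$marker"* ]] && found=1
    done < <(PCI_ALLOWED=$target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 )) || { echo "disk $target was not protected by its mount" >&2; exit 1; }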
00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.846 18:57:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:11.846 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:11.846 00:03:11.846 real 0m6.168s 00:03:11.846 user 0m1.490s 00:03:11.846 sys 0m2.244s 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:11.846 18:57:52 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:11.846 ************************************ 00:03:11.846 END TEST nvme_mount 00:03:11.846 ************************************ 00:03:11.846 18:57:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:11.846 18:57:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:11.846 18:57:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.846 18:57:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.846 18:57:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:11.846 ************************************ 00:03:11.846 START TEST dm_mount 00:03:11.846 ************************************ 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:11.846 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:11.847 18:57:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:12.782 Creating new GPT entries in memory. 00:03:12.782 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:12.782 other utilities. 00:03:12.782 18:57:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:12.782 18:57:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:12.782 18:57:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:12.782 18:57:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:12.782 18:57:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:13.723 Creating new GPT entries in memory. 00:03:13.723 The operation has completed successfully. 00:03:13.723 18:57:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:13.723 18:57:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:13.723 18:57:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:13.723 18:57:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:13.723 18:57:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:15.101 The operation has completed successfully. 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3172255 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:15.101 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.102 18:57:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:16.034 18:57:56 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.034 18:57:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.407 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:17.408 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:17.408 00:03:17.408 real 0m5.612s 00:03:17.408 user 0m0.936s 00:03:17.408 sys 0m1.509s 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.408 18:57:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:17.408 ************************************ 00:03:17.408 END TEST dm_mount 00:03:17.408 ************************************ 00:03:17.408 18:57:57 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:17.408 18:57:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:17.408 18:57:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:17.408 18:57:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.408 18:57:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.408 18:57:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:17.408 18:57:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:17.408 18:57:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:17.665 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:17.665 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:17.665 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:17.665 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:17.665 18:57:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:17.665 18:57:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:17.665 18:57:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:17.665 18:57:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.665 18:57:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:17.665 18:57:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:17.665 18:57:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:17.665 00:03:17.665 real 0m13.682s 00:03:17.665 user 0m3.076s 00:03:17.665 sys 0m4.766s 00:03:17.665 18:57:58 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.665 18:57:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:17.665 ************************************ 00:03:17.665 END TEST devices 00:03:17.665 ************************************ 00:03:17.665 18:57:58 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:17.665 00:03:17.665 real 0m42.428s 00:03:17.665 user 0m12.155s 00:03:17.665 sys 0m18.502s 00:03:17.665 18:57:58 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.665 18:57:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:17.665 ************************************ 00:03:17.665 END TEST setup.sh 00:03:17.665 ************************************ 00:03:17.665 18:57:58 -- common/autotest_common.sh@1142 -- # return 0 00:03:17.665 18:57:58 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.041 Hugepages 00:03:19.041 node hugesize free / total 00:03:19.041 node0 1048576kB 0 / 0 00:03:19.041 node0 2048kB 2048 / 2048 00:03:19.041 node1 1048576kB 0 / 0 00:03:19.041 node1 2048kB 0 / 0 00:03:19.041 00:03:19.041 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.041 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:19.041 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:19.041 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:19.041 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:19.041 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:19.041 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:19.041 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:19.041 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:19.041 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:19.041 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:19.041 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:19.041 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:19.041 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:19.041 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:19.041 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:19.041 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:19.041 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:19.041 18:57:59 -- spdk/autotest.sh@130 -- # uname -s 00:03:19.041 18:57:59 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:19.041 18:57:59 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:19.041 18:57:59 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.422 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.422 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.422 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.422 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.422 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.422 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:20.422 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.422 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.422 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:21.387 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.387 18:58:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:22.324 18:58:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:22.324 18:58:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:22.324 18:58:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:22.324 18:58:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:22.324 18:58:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:22.324 18:58:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:22.324 18:58:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:22.324 18:58:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:22.324 18:58:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:22.324 18:58:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:22.324 18:58:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:22.324 18:58:02 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.700 Waiting for block devices as requested 00:03:23.700 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:23.700 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:23.700 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:23.959 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:23.959 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:23.959 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:23.959 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:24.218 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:24.218 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:24.218 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:24.218 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:24.476 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:24.476 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:24.476 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:24.734 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:24.734 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:24.734 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:24.993 18:58:05 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:24.993 18:58:05 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:24.993 18:58:05 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:24.993 18:58:05 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:24.993 18:58:05 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:24.993 18:58:05 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:24.993 18:58:05 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:24.993 18:58:05 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:24.993 18:58:05 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:24.993 18:58:05 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:24.993 18:58:05 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:24.993 18:58:05 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:24.993 18:58:05 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:24.993 18:58:05 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:24.993 18:58:05 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:24.993 18:58:05 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:24.994 18:58:05 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:24.994 18:58:05 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:24.994 18:58:05 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:24.994 18:58:05 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:24.994 18:58:05 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:24.994 18:58:05 -- common/autotest_common.sh@1557 -- # continue 00:03:24.994 18:58:05 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:24.994 18:58:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:24.994 18:58:05 -- common/autotest_common.sh@10 -- # set +x 00:03:24.994 18:58:05 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:24.994 18:58:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:24.994 18:58:05 -- common/autotest_common.sh@10 -- # set +x 00:03:24.994 18:58:05 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.370 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:26.370 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:26.370 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:26.370 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:26.370 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:26.370 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:26.370 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:26.370 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:26.370 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:26.370 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
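Note (annotation, not part of the captured log): the oacs/unvmcap parsing traced just above reduces to the nvme-cli pipeline sketched below; it assumes /dev/nvme0 is the controller under test, nvme-cli is installed, and the commands run as root.
# sketch of the OACS / unallocated-capacity check from the trace above
oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)          # e.g. ' 0xf'
oacs_ns_manage=$((oacs & 0x8))                                     # bit 3 = namespace management
unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)    # unallocated NVM capacity
if ((oacs_ns_manage != 0)) && ((unvmcap == 0)); then
  echo "namespace management supported, no unallocated capacity to reclaim"
fi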
00:03:26.370 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:26.370 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:26.370 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:26.370 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:26.370 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:26.370 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:26.936 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.194 18:58:07 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:27.194 18:58:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:27.194 18:58:07 -- common/autotest_common.sh@10 -- # set +x 00:03:27.194 18:58:07 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:27.194 18:58:07 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:27.194 18:58:07 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:27.194 18:58:07 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:27.194 18:58:07 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:27.194 18:58:07 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:27.194 18:58:07 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:27.194 18:58:07 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:27.194 18:58:07 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.194 18:58:07 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:27.194 18:58:07 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:27.194 18:58:07 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:27.194 18:58:07 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:27.194 18:58:07 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:27.194 18:58:07 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:27.194 18:58:07 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:27.194 18:58:07 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:27.194 18:58:07 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:27.194 18:58:07 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:27.194 18:58:07 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:27.194 18:58:07 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3177426 00:03:27.194 18:58:07 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:27.194 18:58:07 -- common/autotest_common.sh@1598 -- # waitforlisten 3177426 00:03:27.194 18:58:07 -- common/autotest_common.sh@829 -- # '[' -z 3177426 ']' 00:03:27.194 18:58:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.194 18:58:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:27.194 18:58:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.194 18:58:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:27.194 18:58:07 -- common/autotest_common.sh@10 -- # set +x 00:03:27.452 [2024-07-15 18:58:07.657028] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
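Note (annotation, not part of the captured log): the get_nvme_bdfs / get_nvme_bdfs_by_id trace above amounts to the sketch below; the gen_nvme.sh path is relative to the SPDK tree, jq is assumed to be available, and 0x0a54 is the PCI device id filtered for in this run.
# sketch of the NVMe BDF enumeration and device-id filter traced above
bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))    # all NVMe BDFs SPDK can see
for bdf in "${bdfs[@]}"; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")                 # PCI device id, e.g. 0x0a54
  [[ $device == 0x0a54 ]] && echo "$bdf"                           # keep only 0x0a54 controllers
done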
00:03:27.452 [2024-07-15 18:58:07.657124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3177426 ] 00:03:27.452 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.452 [2024-07-15 18:58:07.715513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.452 [2024-07-15 18:58:07.824136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.711 18:58:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:27.711 18:58:08 -- common/autotest_common.sh@862 -- # return 0 00:03:27.711 18:58:08 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:27.711 18:58:08 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:27.711 18:58:08 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:30.997 nvme0n1 00:03:30.997 18:58:11 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:30.997 [2024-07-15 18:58:11.404139] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:30.997 [2024-07-15 18:58:11.404189] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:30.997 request: 00:03:30.997 { 00:03:30.997 "nvme_ctrlr_name": "nvme0", 00:03:30.997 "password": "test", 00:03:30.997 "method": "bdev_nvme_opal_revert", 00:03:30.997 "req_id": 1 00:03:30.997 } 00:03:30.997 Got JSON-RPC error response 00:03:30.997 response: 00:03:30.997 { 00:03:30.997 "code": -32603, 00:03:30.997 "message": "Internal error" 00:03:30.997 } 00:03:30.997 18:58:11 -- common/autotest_common.sh@1604 -- # true 00:03:30.997 18:58:11 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:30.997 18:58:11 -- common/autotest_common.sh@1608 -- # killprocess 3177426 00:03:30.997 18:58:11 -- common/autotest_common.sh@948 -- # '[' -z 3177426 ']' 00:03:30.997 18:58:11 -- common/autotest_common.sh@952 -- # kill -0 3177426 00:03:30.997 18:58:11 -- common/autotest_common.sh@953 -- # uname 00:03:30.997 18:58:11 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:30.997 18:58:11 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3177426 00:03:31.256 18:58:11 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:31.256 18:58:11 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:31.256 18:58:11 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3177426' 00:03:31.256 killing process with pid 3177426 00:03:31.256 18:58:11 -- common/autotest_common.sh@967 -- # kill 3177426 00:03:31.256 18:58:11 -- common/autotest_common.sh@972 -- # wait 3177426 00:03:33.157 18:58:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:33.157 18:58:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:33.157 18:58:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:33.157 18:58:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:33.157 18:58:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:33.157 18:58:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:33.157 18:58:13 -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 18:58:13 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:33.157 18:58:13 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.157 18:58:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.157 18:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.157 18:58:13 -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 ************************************ 00:03:33.157 START TEST env 00:03:33.157 ************************************ 00:03:33.157 18:58:13 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.157 * Looking for test storage... 00:03:33.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:33.157 18:58:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.157 18:58:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.157 18:58:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.157 18:58:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 ************************************ 00:03:33.157 START TEST env_memory 00:03:33.157 ************************************ 00:03:33.157 18:58:13 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.157 00:03:33.157 00:03:33.157 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.157 http://cunit.sourceforge.net/ 00:03:33.157 00:03:33.157 00:03:33.157 Suite: memory 00:03:33.157 Test: alloc and free memory map ...[2024-07-15 18:58:13.392831] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:33.157 passed 00:03:33.157 Test: mem map translation ...[2024-07-15 18:58:13.412968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:33.157 [2024-07-15 18:58:13.412990] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:33.157 [2024-07-15 18:58:13.413039] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:33.157 [2024-07-15 18:58:13.413051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:33.157 passed 00:03:33.157 Test: mem map registration ...[2024-07-15 18:58:13.455655] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:33.157 [2024-07-15 18:58:13.455677] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:33.157 passed 00:03:33.157 Test: mem map adjacent registrations ...passed 00:03:33.157 00:03:33.157 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.157 suites 1 1 n/a 0 0 00:03:33.157 tests 4 4 4 0 0 00:03:33.157 asserts 152 152 152 0 n/a 00:03:33.157 00:03:33.157 Elapsed time = 0.142 seconds 00:03:33.157 00:03:33.157 real 0m0.150s 00:03:33.157 user 0m0.141s 00:03:33.157 sys 0m0.009s 00:03:33.157 18:58:13 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.157 18:58:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 ************************************ 00:03:33.157 END TEST env_memory 00:03:33.157 ************************************ 00:03:33.157 18:58:13 env -- common/autotest_common.sh@1142 -- # return 0 00:03:33.157 18:58:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.157 18:58:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.157 18:58:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.157 18:58:13 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.157 ************************************ 00:03:33.157 START TEST env_vtophys 00:03:33.157 ************************************ 00:03:33.157 18:58:13 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.157 EAL: lib.eal log level changed from notice to debug 00:03:33.157 EAL: Detected lcore 0 as core 0 on socket 0 00:03:33.157 EAL: Detected lcore 1 as core 1 on socket 0 00:03:33.157 EAL: Detected lcore 2 as core 2 on socket 0 00:03:33.157 EAL: Detected lcore 3 as core 3 on socket 0 00:03:33.157 EAL: Detected lcore 4 as core 4 on socket 0 00:03:33.157 EAL: Detected lcore 5 as core 5 on socket 0 00:03:33.157 EAL: Detected lcore 6 as core 8 on socket 0 00:03:33.157 EAL: Detected lcore 7 as core 9 on socket 0 00:03:33.157 EAL: Detected lcore 8 as core 10 on socket 0 00:03:33.157 EAL: Detected lcore 9 as core 11 on socket 0 00:03:33.157 EAL: Detected lcore 10 as core 12 on socket 0 00:03:33.157 EAL: Detected lcore 11 as core 13 on socket 0 00:03:33.157 EAL: Detected lcore 12 as core 0 on socket 1 00:03:33.157 EAL: Detected lcore 13 as core 1 on socket 1 00:03:33.157 EAL: Detected lcore 14 as core 2 on socket 1 00:03:33.157 EAL: Detected lcore 15 as core 3 on socket 1 00:03:33.157 EAL: Detected lcore 16 as core 4 on socket 1 00:03:33.157 EAL: Detected lcore 17 as core 5 on socket 1 00:03:33.157 EAL: Detected lcore 18 as core 8 on socket 1 00:03:33.157 EAL: Detected lcore 19 as core 9 on socket 1 00:03:33.157 EAL: Detected lcore 20 as core 10 on socket 1 00:03:33.157 EAL: Detected lcore 21 as core 11 on socket 1 00:03:33.157 EAL: Detected lcore 22 as core 12 on socket 1 00:03:33.157 EAL: Detected lcore 23 as core 13 on socket 1 00:03:33.157 EAL: Detected lcore 24 as core 0 on socket 0 00:03:33.157 EAL: Detected lcore 25 as core 1 on socket 0 00:03:33.157 EAL: Detected lcore 26 as core 2 on socket 0 00:03:33.157 EAL: Detected lcore 27 as core 3 on socket 0 00:03:33.157 EAL: Detected lcore 28 as core 4 on socket 0 00:03:33.157 EAL: Detected lcore 29 as core 5 on socket 0 00:03:33.157 EAL: Detected lcore 30 as core 8 on socket 0 00:03:33.157 EAL: Detected lcore 31 as core 9 on socket 0 00:03:33.157 EAL: Detected lcore 32 as core 10 on socket 0 00:03:33.157 EAL: Detected lcore 33 as core 11 on socket 0 00:03:33.157 EAL: Detected lcore 34 as core 12 on socket 0 00:03:33.157 EAL: Detected lcore 35 as core 13 on socket 0 00:03:33.157 EAL: Detected lcore 36 as core 0 on socket 1 00:03:33.157 EAL: Detected lcore 37 as core 1 on socket 1 00:03:33.157 EAL: Detected lcore 38 as core 2 on socket 1 00:03:33.157 EAL: Detected lcore 39 as core 3 on socket 1 00:03:33.157 EAL: Detected lcore 40 as core 4 on socket 1 00:03:33.157 EAL: Detected lcore 41 as core 5 on socket 1 00:03:33.157 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:33.157 EAL: Detected lcore 43 as core 9 on socket 1 00:03:33.157 EAL: Detected lcore 44 as core 10 on socket 1 00:03:33.157 EAL: Detected lcore 45 as core 11 on socket 1 00:03:33.157 EAL: Detected lcore 46 as core 12 on socket 1 00:03:33.157 EAL: Detected lcore 47 as core 13 on socket 1 00:03:33.157 EAL: Maximum logical cores by configuration: 128 00:03:33.157 EAL: Detected CPU lcores: 48 00:03:33.157 EAL: Detected NUMA nodes: 2 00:03:33.157 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:33.157 EAL: Detected shared linkage of DPDK 00:03:33.157 EAL: No shared files mode enabled, IPC will be disabled 00:03:33.416 EAL: Bus pci wants IOVA as 'DC' 00:03:33.416 EAL: Buses did not request a specific IOVA mode. 00:03:33.416 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:33.416 EAL: Selected IOVA mode 'VA' 00:03:33.416 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.416 EAL: Probing VFIO support... 00:03:33.416 EAL: IOMMU type 1 (Type 1) is supported 00:03:33.416 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:33.416 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:33.416 EAL: VFIO support initialized 00:03:33.416 EAL: Ask a virtual area of 0x2e000 bytes 00:03:33.416 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:33.416 EAL: Setting up physically contiguous memory... 00:03:33.416 EAL: Setting maximum number of open files to 524288 00:03:33.416 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:33.416 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:33.416 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:33.416 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.416 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:33.416 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.416 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.416 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:33.416 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:33.416 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.416 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:33.416 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.416 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.416 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:33.416 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:33.416 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.417 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:33.417 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.417 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.417 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:33.417 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:33.417 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.417 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:33.417 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:33.417 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.417 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:33.417 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:33.417 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:33.417 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.417 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:33.417 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:33.417 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.417 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:33.417 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:33.417 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.417 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:33.417 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.417 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.417 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:33.417 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:33.417 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.417 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:33.417 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.417 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.417 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:33.417 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:33.417 EAL: Ask a virtual area of 0x61000 bytes 00:03:33.417 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:33.417 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:33.417 EAL: Ask a virtual area of 0x400000000 bytes 00:03:33.417 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:33.417 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:33.417 EAL: Hugepages will be freed exactly as allocated. 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: TSC frequency is ~2700000 KHz 00:03:33.417 EAL: Main lcore 0 is ready (tid=7f66a1b4aa00;cpuset=[0]) 00:03:33.417 EAL: Trying to obtain current memory policy. 00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 0 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 2MB 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:33.417 EAL: Mem event callback 'spdk:(nil)' registered 00:03:33.417 00:03:33.417 00:03:33.417 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.417 http://cunit.sourceforge.net/ 00:03:33.417 00:03:33.417 00:03:33.417 Suite: components_suite 00:03:33.417 Test: vtophys_malloc_test ...passed 00:03:33.417 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 4 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 4MB 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was shrunk by 4MB 00:03:33.417 EAL: Trying to obtain current memory policy. 
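Note (annotation, not part of the captured log): the memseg reservations above are carved out of the per-node 2048kB hugepage pools summarized earlier in this log (node0 2048/2048, node1 0/0); assuming the standard sysfs layout, those pools can be inspected directly:
for node in /sys/devices/system/node/node[0-9]*; do
  hp=$node/hugepages/hugepages-2048kB
  echo "$(basename "$node") 2048kB: $(cat "$hp/free_hugepages") free / $(cat "$hp/nr_hugepages") total"
done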
00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 4 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 6MB 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was shrunk by 6MB 00:03:33.417 EAL: Trying to obtain current memory policy. 00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 4 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 10MB 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was shrunk by 10MB 00:03:33.417 EAL: Trying to obtain current memory policy. 00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 4 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 18MB 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was shrunk by 18MB 00:03:33.417 EAL: Trying to obtain current memory policy. 00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 4 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 34MB 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was shrunk by 34MB 00:03:33.417 EAL: Trying to obtain current memory policy. 00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 4 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 66MB 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was shrunk by 66MB 00:03:33.417 EAL: Trying to obtain current memory policy. 
00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.417 EAL: Restoring previous memory policy: 4 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was expanded by 130MB 00:03:33.417 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.417 EAL: request: mp_malloc_sync 00:03:33.417 EAL: No shared files mode enabled, IPC is disabled 00:03:33.417 EAL: Heap on socket 0 was shrunk by 130MB 00:03:33.417 EAL: Trying to obtain current memory policy. 00:03:33.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.677 EAL: Restoring previous memory policy: 4 00:03:33.677 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.677 EAL: request: mp_malloc_sync 00:03:33.677 EAL: No shared files mode enabled, IPC is disabled 00:03:33.677 EAL: Heap on socket 0 was expanded by 258MB 00:03:33.677 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.677 EAL: request: mp_malloc_sync 00:03:33.677 EAL: No shared files mode enabled, IPC is disabled 00:03:33.677 EAL: Heap on socket 0 was shrunk by 258MB 00:03:33.677 EAL: Trying to obtain current memory policy. 00:03:33.677 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:33.935 EAL: Restoring previous memory policy: 4 00:03:33.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.935 EAL: request: mp_malloc_sync 00:03:33.935 EAL: No shared files mode enabled, IPC is disabled 00:03:33.935 EAL: Heap on socket 0 was expanded by 514MB 00:03:33.935 EAL: Calling mem event callback 'spdk:(nil)' 00:03:33.935 EAL: request: mp_malloc_sync 00:03:33.935 EAL: No shared files mode enabled, IPC is disabled 00:03:33.935 EAL: Heap on socket 0 was shrunk by 514MB 00:03:33.935 EAL: Trying to obtain current memory policy. 
00:03:33.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.193 EAL: Restoring previous memory policy: 4 00:03:34.193 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.193 EAL: request: mp_malloc_sync 00:03:34.193 EAL: No shared files mode enabled, IPC is disabled 00:03:34.193 EAL: Heap on socket 0 was expanded by 1026MB 00:03:34.451 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.709 EAL: request: mp_malloc_sync 00:03:34.709 EAL: No shared files mode enabled, IPC is disabled 00:03:34.709 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:34.709 passed 00:03:34.709 00:03:34.709 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.709 suites 1 1 n/a 0 0 00:03:34.709 tests 2 2 2 0 0 00:03:34.709 asserts 497 497 497 0 n/a 00:03:34.709 00:03:34.709 Elapsed time = 1.366 seconds 00:03:34.709 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.709 EAL: request: mp_malloc_sync 00:03:34.709 EAL: No shared files mode enabled, IPC is disabled 00:03:34.709 EAL: Heap on socket 0 was shrunk by 2MB 00:03:34.709 EAL: No shared files mode enabled, IPC is disabled 00:03:34.709 EAL: No shared files mode enabled, IPC is disabled 00:03:34.709 EAL: No shared files mode enabled, IPC is disabled 00:03:34.709 00:03:34.709 real 0m1.489s 00:03:34.709 user 0m0.852s 00:03:34.709 sys 0m0.601s 00:03:34.709 18:58:15 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.709 18:58:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:34.709 ************************************ 00:03:34.709 END TEST env_vtophys 00:03:34.709 ************************************ 00:03:34.709 18:58:15 env -- common/autotest_common.sh@1142 -- # return 0 00:03:34.709 18:58:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:34.709 18:58:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.709 18:58:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.709 18:58:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.709 ************************************ 00:03:34.709 START TEST env_pci 00:03:34.710 ************************************ 00:03:34.710 18:58:15 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:34.710 00:03:34.710 00:03:34.710 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.710 http://cunit.sourceforge.net/ 00:03:34.710 00:03:34.710 00:03:34.710 Suite: pci 00:03:34.710 Test: pci_hook ...[2024-07-15 18:58:15.104864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3178317 has claimed it 00:03:34.710 EAL: Cannot find device (10000:00:01.0) 00:03:34.710 EAL: Failed to attach device on primary process 00:03:34.710 passed 00:03:34.710 00:03:34.710 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.710 suites 1 1 n/a 0 0 00:03:34.710 tests 1 1 1 0 0 00:03:34.710 asserts 25 25 25 0 n/a 00:03:34.710 00:03:34.710 Elapsed time = 0.022 seconds 00:03:34.710 00:03:34.710 real 0m0.035s 00:03:34.710 user 0m0.008s 00:03:34.710 sys 0m0.026s 00:03:34.710 18:58:15 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.710 18:58:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:34.710 ************************************ 00:03:34.710 END TEST env_pci 00:03:34.710 ************************************ 
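Note (annotation, not part of the captured log): every "START TEST ... END TEST ..." banner in this log comes from the run_test helper in autotest_common.sh; the sketch below shows only the wrapping pattern, while the real helper also handles xtrace and timing bookkeeping.
run_test_sketch() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  "$@"; local rc=$?
  echo "************ END TEST $name ************"
  return $rc
}
run_test_sketch env_pci ./test/env/pci/pci_ut      # path relative to the SPDK tree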
00:03:34.969 18:58:15 env -- common/autotest_common.sh@1142 -- # return 0 00:03:34.969 18:58:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:34.969 18:58:15 env -- env/env.sh@15 -- # uname 00:03:34.969 18:58:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:34.969 18:58:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:34.969 18:58:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:34.969 18:58:15 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:34.969 18:58:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.969 18:58:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.969 ************************************ 00:03:34.969 START TEST env_dpdk_post_init 00:03:34.969 ************************************ 00:03:34.969 18:58:15 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:34.969 EAL: Detected CPU lcores: 48 00:03:34.969 EAL: Detected NUMA nodes: 2 00:03:34.969 EAL: Detected shared linkage of DPDK 00:03:34.969 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:34.969 EAL: Selected IOVA mode 'VA' 00:03:34.969 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.969 EAL: VFIO support initialized 00:03:34.969 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:34.969 EAL: Using IOMMU type 1 (Type 1) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:34.969 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:35.227 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:35.227 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:35.227 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:35.227 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:35.227 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:35.227 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:35.227 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:35.794 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:39.148 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:39.148 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:39.412 Starting DPDK initialization... 00:03:39.412 Starting SPDK post initialization... 00:03:39.412 SPDK NVMe probe 00:03:39.412 Attaching to 0000:88:00.0 00:03:39.412 Attached to 0000:88:00.0 00:03:39.412 Cleaning up... 
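Note (annotation, not part of the captured log): the spdk_ioat probes above only succeed because setup.sh rebound those functions from ioatdma to vfio-pci earlier in this log; the current binding of any BDF from this run can be confirmed through sysfs, for example:
for bdf in 0000:00:04.0 0000:80:04.0 0000:88:00.0; do
  drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
  echo "$bdf -> $drv"                               # expect vfio-pci for devices SPDK will claim
done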
00:03:39.412 00:03:39.412 real 0m4.443s 00:03:39.412 user 0m3.300s 00:03:39.412 sys 0m0.197s 00:03:39.412 18:58:19 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.412 18:58:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.412 ************************************ 00:03:39.412 END TEST env_dpdk_post_init 00:03:39.412 ************************************ 00:03:39.412 18:58:19 env -- common/autotest_common.sh@1142 -- # return 0 00:03:39.412 18:58:19 env -- env/env.sh@26 -- # uname 00:03:39.412 18:58:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:39.412 18:58:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.412 18:58:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.412 18:58:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.412 18:58:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.412 ************************************ 00:03:39.412 START TEST env_mem_callbacks 00:03:39.412 ************************************ 00:03:39.412 18:58:19 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:39.412 EAL: Detected CPU lcores: 48 00:03:39.412 EAL: Detected NUMA nodes: 2 00:03:39.412 EAL: Detected shared linkage of DPDK 00:03:39.412 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:39.412 EAL: Selected IOVA mode 'VA' 00:03:39.412 EAL: No free 2048 kB hugepages reported on node 1 00:03:39.412 EAL: VFIO support initialized 00:03:39.412 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:39.412 00:03:39.412 00:03:39.412 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.412 http://cunit.sourceforge.net/ 00:03:39.412 00:03:39.412 00:03:39.412 Suite: memory 00:03:39.412 Test: test ... 
00:03:39.412 register 0x200000200000 2097152 00:03:39.412 malloc 3145728 00:03:39.412 register 0x200000400000 4194304 00:03:39.412 buf 0x200000500000 len 3145728 PASSED 00:03:39.412 malloc 64 00:03:39.412 buf 0x2000004fff40 len 64 PASSED 00:03:39.412 malloc 4194304 00:03:39.412 register 0x200000800000 6291456 00:03:39.412 buf 0x200000a00000 len 4194304 PASSED 00:03:39.412 free 0x200000500000 3145728 00:03:39.412 free 0x2000004fff40 64 00:03:39.412 unregister 0x200000400000 4194304 PASSED 00:03:39.412 free 0x200000a00000 4194304 00:03:39.412 unregister 0x200000800000 6291456 PASSED 00:03:39.412 malloc 8388608 00:03:39.412 register 0x200000400000 10485760 00:03:39.412 buf 0x200000600000 len 8388608 PASSED 00:03:39.412 free 0x200000600000 8388608 00:03:39.412 unregister 0x200000400000 10485760 PASSED 00:03:39.412 passed 00:03:39.412 00:03:39.412 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.412 suites 1 1 n/a 0 0 00:03:39.412 tests 1 1 1 0 0 00:03:39.412 asserts 15 15 15 0 n/a 00:03:39.412 00:03:39.412 Elapsed time = 0.005 seconds 00:03:39.412 00:03:39.412 real 0m0.048s 00:03:39.412 user 0m0.018s 00:03:39.412 sys 0m0.030s 00:03:39.412 18:58:19 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.412 18:58:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:39.412 ************************************ 00:03:39.412 END TEST env_mem_callbacks 00:03:39.412 ************************************ 00:03:39.412 18:58:19 env -- common/autotest_common.sh@1142 -- # return 0 00:03:39.412 00:03:39.412 real 0m6.456s 00:03:39.412 user 0m4.436s 00:03:39.412 sys 0m1.053s 00:03:39.412 18:58:19 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.412 18:58:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:39.413 ************************************ 00:03:39.413 END TEST env 00:03:39.413 ************************************ 00:03:39.413 18:58:19 -- common/autotest_common.sh@1142 -- # return 0 00:03:39.413 18:58:19 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.413 18:58:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.413 18:58:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.413 18:58:19 -- common/autotest_common.sh@10 -- # set +x 00:03:39.413 ************************************ 00:03:39.413 START TEST rpc 00:03:39.413 ************************************ 00:03:39.413 18:58:19 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:39.413 * Looking for test storage... 00:03:39.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.413 18:58:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3179089 00:03:39.413 18:58:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:39.413 18:58:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:39.413 18:58:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3179089 00:03:39.413 18:58:19 rpc -- common/autotest_common.sh@829 -- # '[' -z 3179089 ']' 00:03:39.413 18:58:19 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:39.413 18:58:19 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:39.413 18:58:19 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
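Note (annotation, not part of the captured log): rpc.sh launches the target with "-e bdev" and then blocks in waitforlisten until the RPC socket answers; a minimal sketch of that launch-and-wait pattern, with paths shortened relative to the SPDK tree and the default socket assumed:
./build/bin/spdk_tgt -e bdev &                      # enable the bdev tracepoint group
spdk_pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  sleep 0.5                                         # poll until the target is listening
done
echo "spdk_tgt ($spdk_pid) is listening"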
00:03:39.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:39.413 18:58:19 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:39.413 18:58:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.671 [2024-07-15 18:58:19.886990] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:03:39.671 [2024-07-15 18:58:19.887092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179089 ] 00:03:39.671 EAL: No free 2048 kB hugepages reported on node 1 00:03:39.671 [2024-07-15 18:58:19.943516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.671 [2024-07-15 18:58:20.051002] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:39.671 [2024-07-15 18:58:20.051066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3179089' to capture a snapshot of events at runtime. 00:03:39.671 [2024-07-15 18:58:20.051081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:39.671 [2024-07-15 18:58:20.051093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:39.671 [2024-07-15 18:58:20.051103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3179089 for offline analysis/debug. 00:03:39.671 [2024-07-15 18:58:20.051130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.929 18:58:20 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:39.929 18:58:20 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:39.929 18:58:20 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.929 18:58:20 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:39.929 18:58:20 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:39.929 18:58:20 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:39.929 18:58:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.929 18:58:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.929 18:58:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.929 ************************************ 00:03:39.929 START TEST rpc_integrity 00:03:39.929 ************************************ 00:03:39.929 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:39.929 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:39.929 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:39.929 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:39.929 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:39.929 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:39.929 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:40.187 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.187 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.187 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:40.187 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.187 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.187 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:40.187 { 00:03:40.187 "name": "Malloc0", 00:03:40.187 "aliases": [ 00:03:40.187 "0509419a-902d-4437-b28d-7129ee362d66" 00:03:40.187 ], 00:03:40.187 "product_name": "Malloc disk", 00:03:40.187 "block_size": 512, 00:03:40.187 "num_blocks": 16384, 00:03:40.187 "uuid": "0509419a-902d-4437-b28d-7129ee362d66", 00:03:40.187 "assigned_rate_limits": { 00:03:40.187 "rw_ios_per_sec": 0, 00:03:40.187 "rw_mbytes_per_sec": 0, 00:03:40.187 "r_mbytes_per_sec": 0, 00:03:40.187 "w_mbytes_per_sec": 0 00:03:40.187 }, 00:03:40.187 "claimed": false, 00:03:40.187 "zoned": false, 00:03:40.187 "supported_io_types": { 00:03:40.187 "read": true, 00:03:40.187 "write": true, 00:03:40.187 "unmap": true, 00:03:40.187 "flush": true, 00:03:40.187 "reset": true, 00:03:40.187 "nvme_admin": false, 00:03:40.187 "nvme_io": false, 00:03:40.187 "nvme_io_md": false, 00:03:40.187 "write_zeroes": true, 00:03:40.187 "zcopy": true, 00:03:40.187 "get_zone_info": false, 00:03:40.187 "zone_management": false, 00:03:40.187 "zone_append": false, 00:03:40.187 "compare": false, 00:03:40.187 "compare_and_write": false, 00:03:40.187 "abort": true, 00:03:40.187 "seek_hole": false, 00:03:40.187 "seek_data": false, 00:03:40.187 "copy": true, 00:03:40.187 "nvme_iov_md": false 00:03:40.187 }, 00:03:40.187 "memory_domains": [ 00:03:40.187 { 00:03:40.187 "dma_device_id": "system", 00:03:40.187 "dma_device_type": 1 00:03:40.187 }, 00:03:40.187 { 00:03:40.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.187 "dma_device_type": 2 00:03:40.187 } 00:03:40.187 ], 00:03:40.187 "driver_specific": {} 00:03:40.187 } 00:03:40.187 ]' 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:40.187 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:40.187 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.188 [2024-07-15 18:58:20.452628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:40.188 [2024-07-15 18:58:20.452673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:40.188 [2024-07-15 18:58:20.452697] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23d8d50 00:03:40.188 [2024-07-15 18:58:20.452713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:40.188 
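The rpc_integrity run above drives bdev_malloc_create, bdev_passthru_create and bdev_get_bdevs through the bash rpc_cmd wrapper. A minimal sketch of the same sequence replayed by hand with SPDK's scripts/rpc.py client, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock (the bdev names mirror the ones created in this run):

    # replay of the rpc_integrity sequence against a running target
    ./scripts/rpc.py bdev_get_bdevs | jq length        # 0 on a fresh target
    ./scripts/rpc.py bdev_malloc_create 8 512           # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length        # 2: Malloc0 plus the Passthru0 claiming it
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length        # back to 0

The jq length checks correspond to the '[' 0 == 0 ']' and '[' 2 == 2 ']' assertions visible in the trace above.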
[2024-07-15 18:58:20.454418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:40.188 [2024-07-15 18:58:20.454446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:40.188 Passthru0 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:40.188 { 00:03:40.188 "name": "Malloc0", 00:03:40.188 "aliases": [ 00:03:40.188 "0509419a-902d-4437-b28d-7129ee362d66" 00:03:40.188 ], 00:03:40.188 "product_name": "Malloc disk", 00:03:40.188 "block_size": 512, 00:03:40.188 "num_blocks": 16384, 00:03:40.188 "uuid": "0509419a-902d-4437-b28d-7129ee362d66", 00:03:40.188 "assigned_rate_limits": { 00:03:40.188 "rw_ios_per_sec": 0, 00:03:40.188 "rw_mbytes_per_sec": 0, 00:03:40.188 "r_mbytes_per_sec": 0, 00:03:40.188 "w_mbytes_per_sec": 0 00:03:40.188 }, 00:03:40.188 "claimed": true, 00:03:40.188 "claim_type": "exclusive_write", 00:03:40.188 "zoned": false, 00:03:40.188 "supported_io_types": { 00:03:40.188 "read": true, 00:03:40.188 "write": true, 00:03:40.188 "unmap": true, 00:03:40.188 "flush": true, 00:03:40.188 "reset": true, 00:03:40.188 "nvme_admin": false, 00:03:40.188 "nvme_io": false, 00:03:40.188 "nvme_io_md": false, 00:03:40.188 "write_zeroes": true, 00:03:40.188 "zcopy": true, 00:03:40.188 "get_zone_info": false, 00:03:40.188 "zone_management": false, 00:03:40.188 "zone_append": false, 00:03:40.188 "compare": false, 00:03:40.188 "compare_and_write": false, 00:03:40.188 "abort": true, 00:03:40.188 "seek_hole": false, 00:03:40.188 "seek_data": false, 00:03:40.188 "copy": true, 00:03:40.188 "nvme_iov_md": false 00:03:40.188 }, 00:03:40.188 "memory_domains": [ 00:03:40.188 { 00:03:40.188 "dma_device_id": "system", 00:03:40.188 "dma_device_type": 1 00:03:40.188 }, 00:03:40.188 { 00:03:40.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.188 "dma_device_type": 2 00:03:40.188 } 00:03:40.188 ], 00:03:40.188 "driver_specific": {} 00:03:40.188 }, 00:03:40.188 { 00:03:40.188 "name": "Passthru0", 00:03:40.188 "aliases": [ 00:03:40.188 "d267f69b-e7da-520e-8dd0-3f090fa577c3" 00:03:40.188 ], 00:03:40.188 "product_name": "passthru", 00:03:40.188 "block_size": 512, 00:03:40.188 "num_blocks": 16384, 00:03:40.188 "uuid": "d267f69b-e7da-520e-8dd0-3f090fa577c3", 00:03:40.188 "assigned_rate_limits": { 00:03:40.188 "rw_ios_per_sec": 0, 00:03:40.188 "rw_mbytes_per_sec": 0, 00:03:40.188 "r_mbytes_per_sec": 0, 00:03:40.188 "w_mbytes_per_sec": 0 00:03:40.188 }, 00:03:40.188 "claimed": false, 00:03:40.188 "zoned": false, 00:03:40.188 "supported_io_types": { 00:03:40.188 "read": true, 00:03:40.188 "write": true, 00:03:40.188 "unmap": true, 00:03:40.188 "flush": true, 00:03:40.188 "reset": true, 00:03:40.188 "nvme_admin": false, 00:03:40.188 "nvme_io": false, 00:03:40.188 "nvme_io_md": false, 00:03:40.188 "write_zeroes": true, 00:03:40.188 "zcopy": true, 00:03:40.188 "get_zone_info": false, 00:03:40.188 "zone_management": false, 00:03:40.188 "zone_append": false, 00:03:40.188 "compare": false, 00:03:40.188 "compare_and_write": false, 00:03:40.188 "abort": true, 00:03:40.188 "seek_hole": false, 
00:03:40.188 "seek_data": false, 00:03:40.188 "copy": true, 00:03:40.188 "nvme_iov_md": false 00:03:40.188 }, 00:03:40.188 "memory_domains": [ 00:03:40.188 { 00:03:40.188 "dma_device_id": "system", 00:03:40.188 "dma_device_type": 1 00:03:40.188 }, 00:03:40.188 { 00:03:40.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.188 "dma_device_type": 2 00:03:40.188 } 00:03:40.188 ], 00:03:40.188 "driver_specific": { 00:03:40.188 "passthru": { 00:03:40.188 "name": "Passthru0", 00:03:40.188 "base_bdev_name": "Malloc0" 00:03:40.188 } 00:03:40.188 } 00:03:40.188 } 00:03:40.188 ]' 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:40.188 18:58:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:40.188 00:03:40.188 real 0m0.228s 00:03:40.188 user 0m0.153s 00:03:40.188 sys 0m0.018s 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.188 18:58:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.188 ************************************ 00:03:40.188 END TEST rpc_integrity 00:03:40.188 ************************************ 00:03:40.188 18:58:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:40.188 18:58:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:40.188 18:58:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.188 18:58:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.188 18:58:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 ************************************ 00:03:40.446 START TEST rpc_plugins 00:03:40.446 ************************************ 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:40.446 { 00:03:40.446 "name": "Malloc1", 00:03:40.446 "aliases": [ 00:03:40.446 "2bdea4dc-dfca-4d60-b098-66e36e49aa90" 00:03:40.446 ], 00:03:40.446 "product_name": "Malloc disk", 00:03:40.446 "block_size": 4096, 00:03:40.446 "num_blocks": 256, 00:03:40.446 "uuid": "2bdea4dc-dfca-4d60-b098-66e36e49aa90", 00:03:40.446 "assigned_rate_limits": { 00:03:40.446 "rw_ios_per_sec": 0, 00:03:40.446 "rw_mbytes_per_sec": 0, 00:03:40.446 "r_mbytes_per_sec": 0, 00:03:40.446 "w_mbytes_per_sec": 0 00:03:40.446 }, 00:03:40.446 "claimed": false, 00:03:40.446 "zoned": false, 00:03:40.446 "supported_io_types": { 00:03:40.446 "read": true, 00:03:40.446 "write": true, 00:03:40.446 "unmap": true, 00:03:40.446 "flush": true, 00:03:40.446 "reset": true, 00:03:40.446 "nvme_admin": false, 00:03:40.446 "nvme_io": false, 00:03:40.446 "nvme_io_md": false, 00:03:40.446 "write_zeroes": true, 00:03:40.446 "zcopy": true, 00:03:40.446 "get_zone_info": false, 00:03:40.446 "zone_management": false, 00:03:40.446 "zone_append": false, 00:03:40.446 "compare": false, 00:03:40.446 "compare_and_write": false, 00:03:40.446 "abort": true, 00:03:40.446 "seek_hole": false, 00:03:40.446 "seek_data": false, 00:03:40.446 "copy": true, 00:03:40.446 "nvme_iov_md": false 00:03:40.446 }, 00:03:40.446 "memory_domains": [ 00:03:40.446 { 00:03:40.446 "dma_device_id": "system", 00:03:40.446 "dma_device_type": 1 00:03:40.446 }, 00:03:40.446 { 00:03:40.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.446 "dma_device_type": 2 00:03:40.446 } 00:03:40.446 ], 00:03:40.446 "driver_specific": {} 00:03:40.446 } 00:03:40.446 ]' 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:40.446 18:58:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:40.446 00:03:40.446 real 0m0.111s 00:03:40.446 user 0m0.067s 00:03:40.446 sys 0m0.013s 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.446 18:58:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 ************************************ 00:03:40.446 END TEST rpc_plugins 00:03:40.446 ************************************ 00:03:40.446 18:58:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:40.446 18:58:20 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:40.446 18:58:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.446 18:58:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.446 18:58:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 ************************************ 00:03:40.446 START TEST rpc_trace_cmd_test 00:03:40.446 ************************************ 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:40.446 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3179089", 00:03:40.446 "tpoint_group_mask": "0x8", 00:03:40.446 "iscsi_conn": { 00:03:40.446 "mask": "0x2", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "scsi": { 00:03:40.446 "mask": "0x4", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "bdev": { 00:03:40.446 "mask": "0x8", 00:03:40.446 "tpoint_mask": "0xffffffffffffffff" 00:03:40.446 }, 00:03:40.446 "nvmf_rdma": { 00:03:40.446 "mask": "0x10", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "nvmf_tcp": { 00:03:40.446 "mask": "0x20", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "ftl": { 00:03:40.446 "mask": "0x40", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "blobfs": { 00:03:40.446 "mask": "0x80", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "dsa": { 00:03:40.446 "mask": "0x200", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "thread": { 00:03:40.446 "mask": "0x400", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "nvme_pcie": { 00:03:40.446 "mask": "0x800", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "iaa": { 00:03:40.446 "mask": "0x1000", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "nvme_tcp": { 00:03:40.446 "mask": "0x2000", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "bdev_nvme": { 00:03:40.446 "mask": "0x4000", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 }, 00:03:40.446 "sock": { 00:03:40.446 "mask": "0x8000", 00:03:40.446 "tpoint_mask": "0x0" 00:03:40.446 } 00:03:40.446 }' 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:40.446 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:40.704 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:40.704 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:40.704 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:40.704 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:40.704 18:58:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
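The trace assertions above line up with the startup NOTICE earlier in this test: spdk_tgt was launched with '-e bdev', so trace_get_info reports tpoint_group_mask 0x8 with the bdev group fully enabled (tpoint_mask 0xffffffffffffffff) and a shared-memory trace file at /dev/shm/spdk_tgt_trace.pid3179089. A short sketch of the two ways to look at that trace, exactly as the app_setup_trace NOTICE suggests (the pid is the one from this run):

    # inspect the live trace state over RPC
    ./scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path
    # capture a snapshot of events at runtime
    spdk_trace -s spdk_tgt -p 3179089
    # or copy /dev/shm/spdk_tgt_trace.pid3179089 elsewhere for offline analysis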
00:03:40.704 00:03:40.704 real 0m0.196s 00:03:40.704 user 0m0.172s 00:03:40.704 sys 0m0.016s 00:03:40.704 18:58:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.704 18:58:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:40.704 ************************************ 00:03:40.704 END TEST rpc_trace_cmd_test 00:03:40.704 ************************************ 00:03:40.704 18:58:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:40.704 18:58:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:40.704 18:58:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:40.704 18:58:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:40.704 18:58:20 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.704 18:58:20 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.704 18:58:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.704 ************************************ 00:03:40.704 START TEST rpc_daemon_integrity 00:03:40.704 ************************************ 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.704 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:40.705 { 00:03:40.705 "name": "Malloc2", 00:03:40.705 "aliases": [ 00:03:40.705 "4b55c388-5491-486d-8094-65f422f7a567" 00:03:40.705 ], 00:03:40.705 "product_name": "Malloc disk", 00:03:40.705 "block_size": 512, 00:03:40.705 "num_blocks": 16384, 00:03:40.705 "uuid": "4b55c388-5491-486d-8094-65f422f7a567", 00:03:40.705 "assigned_rate_limits": { 00:03:40.705 "rw_ios_per_sec": 0, 00:03:40.705 "rw_mbytes_per_sec": 0, 00:03:40.705 "r_mbytes_per_sec": 0, 00:03:40.705 "w_mbytes_per_sec": 0 00:03:40.705 }, 00:03:40.705 "claimed": false, 00:03:40.705 "zoned": false, 00:03:40.705 "supported_io_types": { 00:03:40.705 "read": true, 00:03:40.705 "write": true, 00:03:40.705 "unmap": true, 00:03:40.705 "flush": true, 00:03:40.705 "reset": true, 00:03:40.705 "nvme_admin": false, 00:03:40.705 "nvme_io": false, 
00:03:40.705 "nvme_io_md": false, 00:03:40.705 "write_zeroes": true, 00:03:40.705 "zcopy": true, 00:03:40.705 "get_zone_info": false, 00:03:40.705 "zone_management": false, 00:03:40.705 "zone_append": false, 00:03:40.705 "compare": false, 00:03:40.705 "compare_and_write": false, 00:03:40.705 "abort": true, 00:03:40.705 "seek_hole": false, 00:03:40.705 "seek_data": false, 00:03:40.705 "copy": true, 00:03:40.705 "nvme_iov_md": false 00:03:40.705 }, 00:03:40.705 "memory_domains": [ 00:03:40.705 { 00:03:40.705 "dma_device_id": "system", 00:03:40.705 "dma_device_type": 1 00:03:40.705 }, 00:03:40.705 { 00:03:40.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.705 "dma_device_type": 2 00:03:40.705 } 00:03:40.705 ], 00:03:40.705 "driver_specific": {} 00:03:40.705 } 00:03:40.705 ]' 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.705 [2024-07-15 18:58:21.122563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:40.705 [2024-07-15 18:58:21.122607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:40.705 [2024-07-15 18:58:21.122631] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23d9c00 00:03:40.705 [2024-07-15 18:58:21.122647] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:40.705 [2024-07-15 18:58:21.123948] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:40.705 [2024-07-15 18:58:21.123974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:40.705 Passthru0 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.705 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:40.963 { 00:03:40.963 "name": "Malloc2", 00:03:40.963 "aliases": [ 00:03:40.963 "4b55c388-5491-486d-8094-65f422f7a567" 00:03:40.963 ], 00:03:40.963 "product_name": "Malloc disk", 00:03:40.963 "block_size": 512, 00:03:40.963 "num_blocks": 16384, 00:03:40.963 "uuid": "4b55c388-5491-486d-8094-65f422f7a567", 00:03:40.963 "assigned_rate_limits": { 00:03:40.963 "rw_ios_per_sec": 0, 00:03:40.963 "rw_mbytes_per_sec": 0, 00:03:40.963 "r_mbytes_per_sec": 0, 00:03:40.963 "w_mbytes_per_sec": 0 00:03:40.963 }, 00:03:40.963 "claimed": true, 00:03:40.963 "claim_type": "exclusive_write", 00:03:40.963 "zoned": false, 00:03:40.963 "supported_io_types": { 00:03:40.963 "read": true, 00:03:40.963 "write": true, 00:03:40.963 "unmap": true, 00:03:40.963 "flush": true, 00:03:40.963 "reset": true, 00:03:40.963 "nvme_admin": false, 00:03:40.963 "nvme_io": false, 00:03:40.963 "nvme_io_md": false, 00:03:40.963 "write_zeroes": true, 00:03:40.963 "zcopy": true, 00:03:40.963 "get_zone_info": 
false, 00:03:40.963 "zone_management": false, 00:03:40.963 "zone_append": false, 00:03:40.963 "compare": false, 00:03:40.963 "compare_and_write": false, 00:03:40.963 "abort": true, 00:03:40.963 "seek_hole": false, 00:03:40.963 "seek_data": false, 00:03:40.963 "copy": true, 00:03:40.963 "nvme_iov_md": false 00:03:40.963 }, 00:03:40.963 "memory_domains": [ 00:03:40.963 { 00:03:40.963 "dma_device_id": "system", 00:03:40.963 "dma_device_type": 1 00:03:40.963 }, 00:03:40.963 { 00:03:40.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.963 "dma_device_type": 2 00:03:40.963 } 00:03:40.963 ], 00:03:40.963 "driver_specific": {} 00:03:40.963 }, 00:03:40.963 { 00:03:40.963 "name": "Passthru0", 00:03:40.963 "aliases": [ 00:03:40.963 "b8d03c8b-97b5-536a-b42e-c1a3d0c5741d" 00:03:40.963 ], 00:03:40.963 "product_name": "passthru", 00:03:40.963 "block_size": 512, 00:03:40.963 "num_blocks": 16384, 00:03:40.963 "uuid": "b8d03c8b-97b5-536a-b42e-c1a3d0c5741d", 00:03:40.963 "assigned_rate_limits": { 00:03:40.963 "rw_ios_per_sec": 0, 00:03:40.963 "rw_mbytes_per_sec": 0, 00:03:40.963 "r_mbytes_per_sec": 0, 00:03:40.963 "w_mbytes_per_sec": 0 00:03:40.963 }, 00:03:40.963 "claimed": false, 00:03:40.963 "zoned": false, 00:03:40.963 "supported_io_types": { 00:03:40.963 "read": true, 00:03:40.963 "write": true, 00:03:40.963 "unmap": true, 00:03:40.963 "flush": true, 00:03:40.963 "reset": true, 00:03:40.963 "nvme_admin": false, 00:03:40.963 "nvme_io": false, 00:03:40.963 "nvme_io_md": false, 00:03:40.963 "write_zeroes": true, 00:03:40.963 "zcopy": true, 00:03:40.963 "get_zone_info": false, 00:03:40.963 "zone_management": false, 00:03:40.963 "zone_append": false, 00:03:40.963 "compare": false, 00:03:40.963 "compare_and_write": false, 00:03:40.963 "abort": true, 00:03:40.963 "seek_hole": false, 00:03:40.963 "seek_data": false, 00:03:40.963 "copy": true, 00:03:40.963 "nvme_iov_md": false 00:03:40.963 }, 00:03:40.963 "memory_domains": [ 00:03:40.963 { 00:03:40.963 "dma_device_id": "system", 00:03:40.963 "dma_device_type": 1 00:03:40.963 }, 00:03:40.963 { 00:03:40.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.963 "dma_device_type": 2 00:03:40.963 } 00:03:40.963 ], 00:03:40.963 "driver_specific": { 00:03:40.963 "passthru": { 00:03:40.963 "name": "Passthru0", 00:03:40.963 "base_bdev_name": "Malloc2" 00:03:40.963 } 00:03:40.963 } 00:03:40.963 } 00:03:40.963 ]' 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.963 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:40.964 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.964 18:58:21 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.964 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.964 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:40.964 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:40.964 18:58:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:40.964 00:03:40.964 real 0m0.224s 00:03:40.964 user 0m0.148s 00:03:40.964 sys 0m0.026s 00:03:40.964 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.964 18:58:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.964 ************************************ 00:03:40.964 END TEST rpc_daemon_integrity 00:03:40.964 ************************************ 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:40.964 18:58:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:40.964 18:58:21 rpc -- rpc/rpc.sh@84 -- # killprocess 3179089 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@948 -- # '[' -z 3179089 ']' 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@952 -- # kill -0 3179089 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@953 -- # uname 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3179089 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3179089' 00:03:40.964 killing process with pid 3179089 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@967 -- # kill 3179089 00:03:40.964 18:58:21 rpc -- common/autotest_common.sh@972 -- # wait 3179089 00:03:41.529 00:03:41.529 real 0m1.968s 00:03:41.529 user 0m2.455s 00:03:41.529 sys 0m0.581s 00:03:41.529 18:58:21 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.529 18:58:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.529 ************************************ 00:03:41.529 END TEST rpc 00:03:41.529 ************************************ 00:03:41.529 18:58:21 -- common/autotest_common.sh@1142 -- # return 0 00:03:41.529 18:58:21 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:41.529 18:58:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.529 18:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.529 18:58:21 -- common/autotest_common.sh@10 -- # set +x 00:03:41.529 ************************************ 00:03:41.529 START TEST skip_rpc 00:03:41.529 ************************************ 00:03:41.529 18:58:21 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:41.529 * Looking for test storage... 
00:03:41.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.529 18:58:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.529 18:58:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:41.529 18:58:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:41.529 18:58:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.529 18:58:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.529 18:58:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.529 ************************************ 00:03:41.529 START TEST skip_rpc 00:03:41.529 ************************************ 00:03:41.529 18:58:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:41.529 18:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3179413 00:03:41.529 18:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:41.529 18:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.529 18:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:41.529 [2024-07-15 18:58:21.928238] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:03:41.529 [2024-07-15 18:58:21.928308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3179413 ] 00:03:41.529 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.787 [2024-07-15 18:58:21.987010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.787 [2024-07-15 18:58:22.108456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3179413 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3179413 ']' 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3179413 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3179413 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3179413' 00:03:47.045 killing process with pid 3179413 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3179413 00:03:47.045 18:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3179413 00:03:47.045 00:03:47.045 real 0m5.485s 00:03:47.045 user 0m5.163s 00:03:47.045 sys 0m0.324s 00:03:47.045 18:58:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.045 18:58:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.045 ************************************ 00:03:47.045 END TEST skip_rpc 00:03:47.045 ************************************ 00:03:47.045 18:58:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:47.045 18:58:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:47.045 18:58:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.045 18:58:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.045 18:58:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.045 ************************************ 00:03:47.045 START TEST skip_rpc_with_json 00:03:47.045 ************************************ 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3180106 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3180106 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3180106 ']' 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
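The skip_rpc pass above reduces to: start spdk_tgt with --no-rpc-server, in which case /var/tmp/spdk.sock is never created, and verify that an RPC such as spdk_get_version fails (the NOT wrapper expects a non-zero exit). A stand-alone sketch of the same check, assuming it is run from the SPDK repository root:

    # spdk_tgt without an RPC server: any RPC must fail
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded"
    else
        echo "RPC failed as expected"
    fi
    kill $tgt_pid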
00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:47.045 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.045 [2024-07-15 18:58:27.461852] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:03:47.045 [2024-07-15 18:58:27.461963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3180106 ] 00:03:47.304 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.304 [2024-07-15 18:58:27.520152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.304 [2024-07-15 18:58:27.629274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.562 [2024-07-15 18:58:27.887909] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:47.562 request: 00:03:47.562 { 00:03:47.562 "trtype": "tcp", 00:03:47.562 "method": "nvmf_get_transports", 00:03:47.562 "req_id": 1 00:03:47.562 } 00:03:47.562 Got JSON-RPC error response 00:03:47.562 response: 00:03:47.562 { 00:03:47.562 "code": -19, 00:03:47.562 "message": "No such device" 00:03:47.562 } 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.562 [2024-07-15 18:58:27.896032] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:47.562 18:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.821 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:47.821 18:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:47.821 { 00:03:47.821 "subsystems": [ 00:03:47.821 { 00:03:47.821 "subsystem": "vfio_user_target", 00:03:47.821 "config": null 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "keyring", 00:03:47.821 "config": [] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "iobuf", 00:03:47.821 "config": [ 00:03:47.821 { 00:03:47.821 "method": "iobuf_set_options", 00:03:47.821 "params": { 00:03:47.821 "small_pool_count": 8192, 00:03:47.821 "large_pool_count": 1024, 00:03:47.821 "small_bufsize": 8192, 00:03:47.821 "large_bufsize": 
135168 00:03:47.821 } 00:03:47.821 } 00:03:47.821 ] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "sock", 00:03:47.821 "config": [ 00:03:47.821 { 00:03:47.821 "method": "sock_set_default_impl", 00:03:47.821 "params": { 00:03:47.821 "impl_name": "posix" 00:03:47.821 } 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "method": "sock_impl_set_options", 00:03:47.821 "params": { 00:03:47.821 "impl_name": "ssl", 00:03:47.821 "recv_buf_size": 4096, 00:03:47.821 "send_buf_size": 4096, 00:03:47.821 "enable_recv_pipe": true, 00:03:47.821 "enable_quickack": false, 00:03:47.821 "enable_placement_id": 0, 00:03:47.821 "enable_zerocopy_send_server": true, 00:03:47.821 "enable_zerocopy_send_client": false, 00:03:47.821 "zerocopy_threshold": 0, 00:03:47.821 "tls_version": 0, 00:03:47.821 "enable_ktls": false 00:03:47.821 } 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "method": "sock_impl_set_options", 00:03:47.821 "params": { 00:03:47.821 "impl_name": "posix", 00:03:47.821 "recv_buf_size": 2097152, 00:03:47.821 "send_buf_size": 2097152, 00:03:47.821 "enable_recv_pipe": true, 00:03:47.821 "enable_quickack": false, 00:03:47.821 "enable_placement_id": 0, 00:03:47.821 "enable_zerocopy_send_server": true, 00:03:47.821 "enable_zerocopy_send_client": false, 00:03:47.821 "zerocopy_threshold": 0, 00:03:47.821 "tls_version": 0, 00:03:47.821 "enable_ktls": false 00:03:47.821 } 00:03:47.821 } 00:03:47.821 ] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "vmd", 00:03:47.821 "config": [] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "accel", 00:03:47.821 "config": [ 00:03:47.821 { 00:03:47.821 "method": "accel_set_options", 00:03:47.821 "params": { 00:03:47.821 "small_cache_size": 128, 00:03:47.821 "large_cache_size": 16, 00:03:47.821 "task_count": 2048, 00:03:47.821 "sequence_count": 2048, 00:03:47.821 "buf_count": 2048 00:03:47.821 } 00:03:47.821 } 00:03:47.821 ] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "bdev", 00:03:47.821 "config": [ 00:03:47.821 { 00:03:47.821 "method": "bdev_set_options", 00:03:47.821 "params": { 00:03:47.821 "bdev_io_pool_size": 65535, 00:03:47.821 "bdev_io_cache_size": 256, 00:03:47.821 "bdev_auto_examine": true, 00:03:47.821 "iobuf_small_cache_size": 128, 00:03:47.821 "iobuf_large_cache_size": 16 00:03:47.821 } 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "method": "bdev_raid_set_options", 00:03:47.821 "params": { 00:03:47.821 "process_window_size_kb": 1024 00:03:47.821 } 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "method": "bdev_iscsi_set_options", 00:03:47.821 "params": { 00:03:47.821 "timeout_sec": 30 00:03:47.821 } 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "method": "bdev_nvme_set_options", 00:03:47.821 "params": { 00:03:47.821 "action_on_timeout": "none", 00:03:47.821 "timeout_us": 0, 00:03:47.821 "timeout_admin_us": 0, 00:03:47.821 "keep_alive_timeout_ms": 10000, 00:03:47.821 "arbitration_burst": 0, 00:03:47.821 "low_priority_weight": 0, 00:03:47.821 "medium_priority_weight": 0, 00:03:47.821 "high_priority_weight": 0, 00:03:47.821 "nvme_adminq_poll_period_us": 10000, 00:03:47.821 "nvme_ioq_poll_period_us": 0, 00:03:47.821 "io_queue_requests": 0, 00:03:47.821 "delay_cmd_submit": true, 00:03:47.821 "transport_retry_count": 4, 00:03:47.821 "bdev_retry_count": 3, 00:03:47.821 "transport_ack_timeout": 0, 00:03:47.821 "ctrlr_loss_timeout_sec": 0, 00:03:47.821 "reconnect_delay_sec": 0, 00:03:47.821 "fast_io_fail_timeout_sec": 0, 00:03:47.821 "disable_auto_failback": false, 00:03:47.821 "generate_uuids": false, 00:03:47.821 "transport_tos": 0, 
00:03:47.821 "nvme_error_stat": false, 00:03:47.821 "rdma_srq_size": 0, 00:03:47.821 "io_path_stat": false, 00:03:47.821 "allow_accel_sequence": false, 00:03:47.821 "rdma_max_cq_size": 0, 00:03:47.821 "rdma_cm_event_timeout_ms": 0, 00:03:47.821 "dhchap_digests": [ 00:03:47.821 "sha256", 00:03:47.821 "sha384", 00:03:47.821 "sha512" 00:03:47.821 ], 00:03:47.821 "dhchap_dhgroups": [ 00:03:47.821 "null", 00:03:47.821 "ffdhe2048", 00:03:47.821 "ffdhe3072", 00:03:47.821 "ffdhe4096", 00:03:47.821 "ffdhe6144", 00:03:47.821 "ffdhe8192" 00:03:47.821 ] 00:03:47.821 } 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "method": "bdev_nvme_set_hotplug", 00:03:47.821 "params": { 00:03:47.821 "period_us": 100000, 00:03:47.821 "enable": false 00:03:47.821 } 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "method": "bdev_wait_for_examine" 00:03:47.821 } 00:03:47.821 ] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "scsi", 00:03:47.821 "config": null 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "scheduler", 00:03:47.821 "config": [ 00:03:47.821 { 00:03:47.821 "method": "framework_set_scheduler", 00:03:47.821 "params": { 00:03:47.821 "name": "static" 00:03:47.821 } 00:03:47.821 } 00:03:47.821 ] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "vhost_scsi", 00:03:47.821 "config": [] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "vhost_blk", 00:03:47.821 "config": [] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "ublk", 00:03:47.821 "config": [] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "nbd", 00:03:47.821 "config": [] 00:03:47.821 }, 00:03:47.821 { 00:03:47.821 "subsystem": "nvmf", 00:03:47.821 "config": [ 00:03:47.821 { 00:03:47.821 "method": "nvmf_set_config", 00:03:47.821 "params": { 00:03:47.821 "discovery_filter": "match_any", 00:03:47.822 "admin_cmd_passthru": { 00:03:47.822 "identify_ctrlr": false 00:03:47.822 } 00:03:47.822 } 00:03:47.822 }, 00:03:47.822 { 00:03:47.822 "method": "nvmf_set_max_subsystems", 00:03:47.822 "params": { 00:03:47.822 "max_subsystems": 1024 00:03:47.822 } 00:03:47.822 }, 00:03:47.822 { 00:03:47.822 "method": "nvmf_set_crdt", 00:03:47.822 "params": { 00:03:47.822 "crdt1": 0, 00:03:47.822 "crdt2": 0, 00:03:47.822 "crdt3": 0 00:03:47.822 } 00:03:47.822 }, 00:03:47.822 { 00:03:47.822 "method": "nvmf_create_transport", 00:03:47.822 "params": { 00:03:47.822 "trtype": "TCP", 00:03:47.822 "max_queue_depth": 128, 00:03:47.822 "max_io_qpairs_per_ctrlr": 127, 00:03:47.822 "in_capsule_data_size": 4096, 00:03:47.822 "max_io_size": 131072, 00:03:47.822 "io_unit_size": 131072, 00:03:47.822 "max_aq_depth": 128, 00:03:47.822 "num_shared_buffers": 511, 00:03:47.822 "buf_cache_size": 4294967295, 00:03:47.822 "dif_insert_or_strip": false, 00:03:47.822 "zcopy": false, 00:03:47.822 "c2h_success": true, 00:03:47.822 "sock_priority": 0, 00:03:47.822 "abort_timeout_sec": 1, 00:03:47.822 "ack_timeout": 0, 00:03:47.822 "data_wr_pool_size": 0 00:03:47.822 } 00:03:47.822 } 00:03:47.822 ] 00:03:47.822 }, 00:03:47.822 { 00:03:47.822 "subsystem": "iscsi", 00:03:47.822 "config": [ 00:03:47.822 { 00:03:47.822 "method": "iscsi_set_options", 00:03:47.822 "params": { 00:03:47.822 "node_base": "iqn.2016-06.io.spdk", 00:03:47.822 "max_sessions": 128, 00:03:47.822 "max_connections_per_session": 2, 00:03:47.822 "max_queue_depth": 64, 00:03:47.822 "default_time2wait": 2, 00:03:47.822 "default_time2retain": 20, 00:03:47.822 "first_burst_length": 8192, 00:03:47.822 "immediate_data": true, 00:03:47.822 "allow_duplicated_isid": false, 00:03:47.822 
"error_recovery_level": 0, 00:03:47.822 "nop_timeout": 60, 00:03:47.822 "nop_in_interval": 30, 00:03:47.822 "disable_chap": false, 00:03:47.822 "require_chap": false, 00:03:47.822 "mutual_chap": false, 00:03:47.822 "chap_group": 0, 00:03:47.822 "max_large_datain_per_connection": 64, 00:03:47.822 "max_r2t_per_connection": 4, 00:03:47.822 "pdu_pool_size": 36864, 00:03:47.822 "immediate_data_pool_size": 16384, 00:03:47.822 "data_out_pool_size": 2048 00:03:47.822 } 00:03:47.822 } 00:03:47.822 ] 00:03:47.822 } 00:03:47.822 ] 00:03:47.822 } 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3180106 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3180106 ']' 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3180106 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3180106 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3180106' 00:03:47.822 killing process with pid 3180106 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3180106 00:03:47.822 18:58:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3180106 00:03:48.388 18:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3180246 00:03:48.388 18:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.388 18:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3180246 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3180246 ']' 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3180246 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3180246 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3180246' 00:03:53.646 killing process with pid 3180246 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3180246 00:03:53.646 18:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3180246 
00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:53.646 00:03:53.646 real 0m6.605s 00:03:53.646 user 0m6.197s 00:03:53.646 sys 0m0.688s 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:53.646 ************************************ 00:03:53.646 END TEST skip_rpc_with_json 00:03:53.646 ************************************ 00:03:53.646 18:58:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:53.646 18:58:34 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:53.646 18:58:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.646 18:58:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.646 18:58:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.646 ************************************ 00:03:53.646 START TEST skip_rpc_with_delay 00:03:53.646 ************************************ 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:53.646 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:53.903 [2024-07-15 18:58:34.118008] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:53.903 [2024-07-15 18:58:34.118113] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:53.903 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:53.903 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:53.903 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:53.903 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:53.903 00:03:53.903 real 0m0.071s 00:03:53.903 user 0m0.057s 00:03:53.903 sys 0m0.014s 00:03:53.903 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.903 18:58:34 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:53.903 ************************************ 00:03:53.903 END TEST skip_rpc_with_delay 00:03:53.903 ************************************ 00:03:53.903 18:58:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:53.903 18:58:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:53.903 18:58:34 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:53.903 18:58:34 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:53.903 18:58:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.903 18:58:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.903 18:58:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.903 ************************************ 00:03:53.903 START TEST exit_on_failed_rpc_init 00:03:53.903 ************************************ 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3180958 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3180958 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3180958 ']' 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:53.903 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:53.903 [2024-07-15 18:58:34.235602] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:03:53.903 [2024-07-15 18:58:34.235689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3180958 ] 00:03:53.903 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.903 [2024-07-15 18:58:34.292739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.160 [2024-07-15 18:58:34.402652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:54.417 18:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:54.417 [2024-07-15 18:58:34.711782] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:03:54.417 [2024-07-15 18:58:34.711886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181085 ] 00:03:54.417 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.417 [2024-07-15 18:58:34.773890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.674 [2024-07-15 18:58:34.891728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:54.674 [2024-07-15 18:58:34.891846] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:54.674 [2024-07-15 18:58:34.891873] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:54.674 [2024-07-15 18:58:34.891897] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3180958 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3180958 ']' 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3180958 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:54.674 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:54.675 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3180958 00:03:54.675 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:54.675 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:54.675 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3180958' 00:03:54.675 killing process with pid 3180958 00:03:54.675 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3180958 00:03:54.675 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3180958 00:03:55.239 00:03:55.239 real 0m1.333s 00:03:55.239 user 0m1.497s 00:03:55.239 sys 0m0.466s 00:03:55.239 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.239 18:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.239 ************************************ 00:03:55.239 END TEST exit_on_failed_rpc_init 00:03:55.239 ************************************ 00:03:55.239 18:58:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:55.239 18:58:35 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.239 00:03:55.239 real 0m13.741s 00:03:55.239 user 0m13.005s 00:03:55.239 sys 0m1.664s 00:03:55.239 18:58:35 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.239 18:58:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.239 ************************************ 00:03:55.239 END TEST skip_rpc 00:03:55.239 ************************************ 00:03:55.239 18:58:35 -- common/autotest_common.sh@1142 -- # return 0 00:03:55.239 18:58:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:55.239 18:58:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.239 18:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.239 18:58:35 -- common/autotest_common.sh@10 -- # set +x 00:03:55.239 ************************************ 00:03:55.239 START TEST rpc_client 00:03:55.239 ************************************ 00:03:55.239 18:58:35 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:55.239 * Looking for test storage... 00:03:55.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:55.239 18:58:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:55.239 OK 00:03:55.239 18:58:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:55.239 00:03:55.239 real 0m0.066s 00:03:55.239 user 0m0.032s 00:03:55.239 sys 0m0.038s 00:03:55.239 18:58:35 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.239 18:58:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:55.239 ************************************ 00:03:55.239 END TEST rpc_client 00:03:55.239 ************************************ 00:03:55.497 18:58:35 -- common/autotest_common.sh@1142 -- # return 0 00:03:55.497 18:58:35 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:55.497 18:58:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.497 18:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.497 18:58:35 -- common/autotest_common.sh@10 -- # set +x 00:03:55.497 ************************************ 00:03:55.497 START TEST json_config 00:03:55.497 ************************************ 00:03:55.497 18:58:35 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:55.497 
18:58:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:55.497 18:58:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:55.497 18:58:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:55.497 18:58:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:55.497 18:58:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.497 18:58:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.497 18:58:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.497 18:58:35 json_config -- paths/export.sh@5 -- # export PATH 00:03:55.497 18:58:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@47 -- # : 0 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:55.497 18:58:35 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:55.497 18:58:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:55.497 INFO: JSON configuration test init 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:55.497 18:58:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:55.497 18:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.497 18:58:35 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:55.497 18:58:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:55.497 18:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.498 18:58:35 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:55.498 18:58:35 json_config -- json_config/common.sh@9 -- # local app=target 00:03:55.498 18:58:35 json_config -- json_config/common.sh@10 -- # shift 00:03:55.498 18:58:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:55.498 18:58:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:55.498 18:58:35 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:55.498 18:58:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.498 18:58:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.498 18:58:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3181329 00:03:55.498 18:58:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:55.498 18:58:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:55.498 Waiting for target to run... 00:03:55.498 18:58:35 json_config -- json_config/common.sh@25 -- # waitforlisten 3181329 /var/tmp/spdk_tgt.sock 00:03:55.498 18:58:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 3181329 ']' 00:03:55.498 18:58:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:55.498 18:58:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.498 18:58:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:55.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:55.498 18:58:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.498 18:58:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.498 [2024-07-15 18:58:35.812673] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:03:55.498 [2024-07-15 18:58:35.812755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181329 ] 00:03:55.498 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.755 [2024-07-15 18:58:36.149355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.012 [2024-07-15 18:58:36.239102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.576 18:58:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.576 18:58:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:56.576 18:58:36 json_config -- json_config/common.sh@26 -- # echo '' 00:03:56.576 00:03:56.576 18:58:36 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:56.576 18:58:36 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:56.576 18:58:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.576 18:58:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.576 18:58:36 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:56.576 18:58:36 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:56.576 18:58:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:56.576 18:58:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.576 18:58:36 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:56.576 18:58:36 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:56.576 18:58:36 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:59.861 18:58:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:59.861 18:58:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:59.861 18:58:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.861 18:58:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.861 18:58:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:59.861 18:58:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:59.861 18:58:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:59.861 18:58:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:59.861 18:58:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:59.861 18:58:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:59.861 18:58:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.861 18:58:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:59.861 18:58:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.861 18:58:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:59.861 18:58:40 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.861 18:58:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:00.119 MallocForNvmf0 00:04:00.119 18:58:40 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.119 18:58:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.408 MallocForNvmf1 00:04:00.408 18:58:40 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.408 18:58:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.672 [2024-07-15 18:58:41.012468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:00.672 18:58:41 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:00.672 18:58:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:00.930 18:58:41 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.930 18:58:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:01.189 18:58:41 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.189 18:58:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:01.447 18:58:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.447 18:58:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:01.704 [2024-07-15 18:58:42.035850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:01.704 18:58:42 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:01.704 18:58:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.704 18:58:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.704 18:58:42 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:01.704 18:58:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.704 18:58:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.704 18:58:42 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:01.704 18:58:42 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.704 18:58:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.961 MallocBdevForConfigChangeCheck 00:04:01.961 18:58:42 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:01.961 18:58:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:01.961 18:58:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.961 18:58:42 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:01.961 18:58:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.527 18:58:42 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:02.527 INFO: shutting down applications... 00:04:02.527 18:58:42 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:02.527 18:58:42 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:02.527 18:58:42 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:02.527 18:58:42 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:04.425 Calling clear_iscsi_subsystem 00:04:04.425 Calling clear_nvmf_subsystem 00:04:04.425 Calling clear_nbd_subsystem 00:04:04.425 Calling clear_ublk_subsystem 00:04:04.425 Calling clear_vhost_blk_subsystem 00:04:04.425 Calling clear_vhost_scsi_subsystem 00:04:04.425 Calling clear_bdev_subsystem 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@345 -- # break 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:04.425 18:58:44 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:04.425 18:58:44 json_config -- json_config/common.sh@31 -- # local app=target 00:04:04.425 18:58:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:04.425 18:58:44 json_config -- json_config/common.sh@35 -- # [[ -n 3181329 ]] 00:04:04.425 18:58:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3181329 00:04:04.425 18:58:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:04.425 18:58:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.425 18:58:44 json_config -- json_config/common.sh@41 -- # kill -0 3181329 00:04:04.425 18:58:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:04.994 18:58:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:04.994 18:58:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:04.994 18:58:45 json_config -- json_config/common.sh@41 -- # kill -0 3181329 00:04:04.994 18:58:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:04.994 18:58:45 json_config -- json_config/common.sh@43 -- # break 00:04:04.994 18:58:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:04.994 18:58:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:04.994 SPDK target shutdown done 00:04:04.994 18:58:45 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:04.994 INFO: relaunching applications... 00:04:04.994 18:58:45 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.994 18:58:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:04.994 18:58:45 json_config -- json_config/common.sh@10 -- # shift 00:04:04.994 18:58:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.994 18:58:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.994 18:58:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.994 18:58:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.994 18:58:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.994 18:58:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3182523 00:04:04.994 18:58:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.994 18:58:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.994 Waiting for target to run... 00:04:04.994 18:58:45 json_config -- json_config/common.sh@25 -- # waitforlisten 3182523 /var/tmp/spdk_tgt.sock 00:04:04.994 18:58:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 3182523 ']' 00:04:04.994 18:58:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.994 18:58:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.994 18:58:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.994 18:58:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.994 18:58:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.994 [2024-07-15 18:58:45.306111] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:04.994 [2024-07-15 18:58:45.306195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3182523 ] 00:04:04.994 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.563 [2024-07-15 18:58:45.829308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.563 [2024-07-15 18:58:45.933142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.844 [2024-07-15 18:58:48.976514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:08.844 [2024-07-15 18:58:49.008982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:09.410 18:58:49 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:09.410 18:58:49 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:09.410 18:58:49 json_config -- json_config/common.sh@26 -- # echo '' 00:04:09.410 00:04:09.410 18:58:49 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:09.410 18:58:49 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:09.410 INFO: Checking if target configuration is the same... 00:04:09.410 18:58:49 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.410 18:58:49 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:09.410 18:58:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.410 + '[' 2 -ne 2 ']' 00:04:09.410 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:09.410 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:09.410 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.410 +++ basename /dev/fd/62 00:04:09.410 ++ mktemp /tmp/62.XXX 00:04:09.410 + tmp_file_1=/tmp/62.Fqd 00:04:09.410 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.410 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:09.410 + tmp_file_2=/tmp/spdk_tgt_config.json.JPB 00:04:09.410 + ret=0 00:04:09.410 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.669 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:09.927 + diff -u /tmp/62.Fqd /tmp/spdk_tgt_config.json.JPB 00:04:09.927 + echo 'INFO: JSON config files are the same' 00:04:09.927 INFO: JSON config files are the same 00:04:09.927 + rm /tmp/62.Fqd /tmp/spdk_tgt_config.json.JPB 00:04:09.927 + exit 0 00:04:09.927 18:58:50 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:09.927 18:58:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:09.927 INFO: changing configuration and checking if this can be detected... 
00:04:09.927 18:58:50 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:09.927 18:58:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:10.185 18:58:50 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.185 18:58:50 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:10.185 18:58:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.185 + '[' 2 -ne 2 ']' 00:04:10.185 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:10.185 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:10.185 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.185 +++ basename /dev/fd/62 00:04:10.185 ++ mktemp /tmp/62.XXX 00:04:10.185 + tmp_file_1=/tmp/62.b0i 00:04:10.185 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.185 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:10.185 + tmp_file_2=/tmp/spdk_tgt_config.json.nuu 00:04:10.185 + ret=0 00:04:10.185 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:10.443 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:10.443 + diff -u /tmp/62.b0i /tmp/spdk_tgt_config.json.nuu 00:04:10.443 + ret=1 00:04:10.443 + echo '=== Start of file: /tmp/62.b0i ===' 00:04:10.443 + cat /tmp/62.b0i 00:04:10.443 + echo '=== End of file: /tmp/62.b0i ===' 00:04:10.443 + echo '' 00:04:10.443 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nuu ===' 00:04:10.443 + cat /tmp/spdk_tgt_config.json.nuu 00:04:10.443 + echo '=== End of file: /tmp/spdk_tgt_config.json.nuu ===' 00:04:10.443 + echo '' 00:04:10.443 + rm /tmp/62.b0i /tmp/spdk_tgt_config.json.nuu 00:04:10.443 + exit 1 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:10.443 INFO: configuration change detected. 
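
The check traced above follows a simple pattern: dump the running target's configuration over its RPC socket, normalize both JSON documents, and diff them; a non-empty diff is what produces the "configuration change detected" message. Below is a minimal sketch of that flow, not the test script itself, assuming an spdk_tgt is already listening on /var/tmp/spdk_tgt.sock and using the workspace paths that appear in this log; treating config_filter.py as a stdin-to-stdout filter is an assumption consistent with the redirection-free invocations shown in the trace.

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock

# Dump the live configuration from the running spdk_tgt.
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" save_config > /tmp/live_config.json

# Normalize both documents so ordering differences do not trigger a diff
# (config_filter.py is assumed to read stdin and write stdout, as the trace suggests).
"$SPDK_DIR/test/json_config/config_filter.py" -method sort < /tmp/live_config.json > /tmp/live_sorted.json
"$SPDK_DIR/test/json_config/config_filter.py" -method sort < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/ref_sorted.json

# Identical output means the saved reference still matches the running target.
if diff -u /tmp/ref_sorted.json /tmp/live_sorted.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
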
00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@317 -- # [[ -n 3182523 ]] 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.443 18:58:50 json_config -- json_config/json_config.sh@323 -- # killprocess 3182523 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@948 -- # '[' -z 3182523 ']' 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@952 -- # kill -0 3182523 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@953 -- # uname 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.443 18:58:50 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3182523 00:04:10.710 18:58:50 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:10.710 18:58:50 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:10.710 18:58:50 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3182523' 00:04:10.710 killing process with pid 3182523 00:04:10.710 18:58:50 json_config -- common/autotest_common.sh@967 -- # kill 3182523 00:04:10.710 18:58:50 json_config -- common/autotest_common.sh@972 -- # wait 3182523 00:04:12.085 18:58:52 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.085 18:58:52 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:12.085 18:58:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:12.085 18:58:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.344 18:58:52 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:12.344 18:58:52 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:12.344 INFO: Success 00:04:12.344 00:04:12.344 real 0m16.831s 
00:04:12.344 user 0m18.882s 00:04:12.344 sys 0m2.054s 00:04:12.344 18:58:52 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.344 18:58:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.344 ************************************ 00:04:12.344 END TEST json_config 00:04:12.344 ************************************ 00:04:12.344 18:58:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.344 18:58:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:12.344 18:58:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.344 18:58:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.344 18:58:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.344 ************************************ 00:04:12.344 START TEST json_config_extra_key 00:04:12.344 ************************************ 00:04:12.344 18:58:52 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:12.344 18:58:52 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.344 18:58:52 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.344 18:58:52 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.344 18:58:52 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.344 18:58:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.344 18:58:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.344 18:58:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:12.344 18:58:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:12.344 18:58:52 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:12.344 18:58:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:12.344 INFO: launching applications... 00:04:12.344 18:58:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3183569 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.344 Waiting for target to run... 00:04:12.344 18:58:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3183569 /var/tmp/spdk_tgt.sock 00:04:12.344 18:58:52 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3183569 ']' 00:04:12.344 18:58:52 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.345 18:58:52 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.345 18:58:52 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:12.345 18:58:52 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.345 18:58:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:12.345 [2024-07-15 18:58:52.679705] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:12.345 [2024-07-15 18:58:52.679787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183569 ] 00:04:12.345 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.603 [2024-07-15 18:58:53.025195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.862 [2024-07-15 18:58:53.114558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.427 18:58:53 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.427 18:58:53 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:13.427 00:04:13.427 18:58:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:13.427 INFO: shutting down applications... 00:04:13.427 18:58:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3183569 ]] 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3183569 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3183569 00:04:13.427 18:58:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:13.993 18:58:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:13.993 18:58:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.993 18:58:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3183569 00:04:13.993 18:58:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.252 18:58:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.252 18:58:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.252 18:58:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3183569 00:04:14.252 18:58:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:14.252 18:58:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:14.252 18:58:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:14.252 18:58:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:14.252 SPDK target shutdown done 00:04:14.252 18:58:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:14.252 Success 00:04:14.252 00:04:14.252 real 0m2.048s 00:04:14.252 user 0m1.562s 00:04:14.252 sys 0m0.438s 00:04:14.252 18:58:54 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.252 18:58:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.252 ************************************ 00:04:14.252 END TEST json_config_extra_key 00:04:14.252 ************************************ 00:04:14.252 18:58:54 -- 
common/autotest_common.sh@1142 -- # return 0 00:04:14.252 18:58:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:14.252 18:58:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.252 18:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.252 18:58:54 -- common/autotest_common.sh@10 -- # set +x 00:04:14.252 ************************************ 00:04:14.252 START TEST alias_rpc 00:04:14.252 ************************************ 00:04:14.252 18:58:54 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:14.511 * Looking for test storage... 00:04:14.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:14.511 18:58:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:14.511 18:58:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3183809 00:04:14.511 18:58:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.511 18:58:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3183809 00:04:14.511 18:58:54 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3183809 ']' 00:04:14.511 18:58:54 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.511 18:58:54 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.511 18:58:54 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.511 18:58:54 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.511 18:58:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.511 [2024-07-15 18:58:54.777369] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:14.511 [2024-07-15 18:58:54.777471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183809 ] 00:04:14.511 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.511 [2024-07-15 18:58:54.839170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.769 [2024-07-15 18:58:54.945978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.027 18:58:55 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.027 18:58:55 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:15.027 18:58:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:15.285 18:58:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3183809 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3183809 ']' 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3183809 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3183809 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3183809' 00:04:15.286 killing process with pid 3183809 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@967 -- # kill 3183809 00:04:15.286 18:58:55 alias_rpc -- common/autotest_common.sh@972 -- # wait 3183809 00:04:15.544 00:04:15.544 real 0m1.303s 00:04:15.544 user 0m1.376s 00:04:15.544 sys 0m0.424s 00:04:15.544 18:58:55 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.544 18:58:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.544 ************************************ 00:04:15.544 END TEST alias_rpc 00:04:15.544 ************************************ 00:04:15.802 18:58:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:15.802 18:58:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:15.802 18:58:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:15.802 18:58:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.802 18:58:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.802 18:58:55 -- common/autotest_common.sh@10 -- # set +x 00:04:15.802 ************************************ 00:04:15.802 START TEST spdkcli_tcp 00:04:15.802 ************************************ 00:04:15.802 18:58:56 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:15.802 * Looking for test storage... 
00:04:15.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:15.802 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3184069 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:15.803 18:58:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3184069 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3184069 ']' 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.803 18:58:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:15.803 [2024-07-15 18:58:56.125425] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
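tcp.sh next bridges the target's UNIX-domain RPC socket onto TCP port 9998 with socat and queries rpc_get_methods across that bridge, which is what produces the long method list below. A hedged sketch of the same bridge, assuming the default /var/tmp/spdk.sock path used in this run:

    # expose the RPC socket on 127.0.0.1:9998 (one-shot bridge, run in the background)
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # same retry/timeout flags as the test: 100 retries, 2 s timeout, TCP transport
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null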
00:04:15.803 [2024-07-15 18:58:56.125520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184069 ] 00:04:15.803 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.803 [2024-07-15 18:58:56.190910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:16.061 [2024-07-15 18:58:56.307686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.061 [2024-07-15 18:58:56.307692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.993 18:58:57 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.993 18:58:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:16.993 18:58:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3184209 00:04:16.993 18:58:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:16.993 18:58:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:16.993 [ 00:04:16.993 "bdev_malloc_delete", 00:04:16.993 "bdev_malloc_create", 00:04:16.993 "bdev_null_resize", 00:04:16.993 "bdev_null_delete", 00:04:16.993 "bdev_null_create", 00:04:16.993 "bdev_nvme_cuse_unregister", 00:04:16.993 "bdev_nvme_cuse_register", 00:04:16.993 "bdev_opal_new_user", 00:04:16.993 "bdev_opal_set_lock_state", 00:04:16.993 "bdev_opal_delete", 00:04:16.993 "bdev_opal_get_info", 00:04:16.993 "bdev_opal_create", 00:04:16.993 "bdev_nvme_opal_revert", 00:04:16.993 "bdev_nvme_opal_init", 00:04:16.993 "bdev_nvme_send_cmd", 00:04:16.993 "bdev_nvme_get_path_iostat", 00:04:16.993 "bdev_nvme_get_mdns_discovery_info", 00:04:16.993 "bdev_nvme_stop_mdns_discovery", 00:04:16.993 "bdev_nvme_start_mdns_discovery", 00:04:16.993 "bdev_nvme_set_multipath_policy", 00:04:16.993 "bdev_nvme_set_preferred_path", 00:04:16.993 "bdev_nvme_get_io_paths", 00:04:16.993 "bdev_nvme_remove_error_injection", 00:04:16.993 "bdev_nvme_add_error_injection", 00:04:16.993 "bdev_nvme_get_discovery_info", 00:04:16.993 "bdev_nvme_stop_discovery", 00:04:16.993 "bdev_nvme_start_discovery", 00:04:16.993 "bdev_nvme_get_controller_health_info", 00:04:16.993 "bdev_nvme_disable_controller", 00:04:16.993 "bdev_nvme_enable_controller", 00:04:16.993 "bdev_nvme_reset_controller", 00:04:16.993 "bdev_nvme_get_transport_statistics", 00:04:16.993 "bdev_nvme_apply_firmware", 00:04:16.993 "bdev_nvme_detach_controller", 00:04:16.993 "bdev_nvme_get_controllers", 00:04:16.993 "bdev_nvme_attach_controller", 00:04:16.993 "bdev_nvme_set_hotplug", 00:04:16.993 "bdev_nvme_set_options", 00:04:16.993 "bdev_passthru_delete", 00:04:16.993 "bdev_passthru_create", 00:04:16.993 "bdev_lvol_set_parent_bdev", 00:04:16.993 "bdev_lvol_set_parent", 00:04:16.993 "bdev_lvol_check_shallow_copy", 00:04:16.993 "bdev_lvol_start_shallow_copy", 00:04:16.993 "bdev_lvol_grow_lvstore", 00:04:16.993 "bdev_lvol_get_lvols", 00:04:16.993 "bdev_lvol_get_lvstores", 00:04:16.993 "bdev_lvol_delete", 00:04:16.993 "bdev_lvol_set_read_only", 00:04:16.993 "bdev_lvol_resize", 00:04:16.993 "bdev_lvol_decouple_parent", 00:04:16.993 "bdev_lvol_inflate", 00:04:16.993 "bdev_lvol_rename", 00:04:16.993 "bdev_lvol_clone_bdev", 00:04:16.993 "bdev_lvol_clone", 00:04:16.993 "bdev_lvol_snapshot", 00:04:16.993 "bdev_lvol_create", 00:04:16.993 "bdev_lvol_delete_lvstore", 00:04:16.993 
"bdev_lvol_rename_lvstore", 00:04:16.993 "bdev_lvol_create_lvstore", 00:04:16.993 "bdev_raid_set_options", 00:04:16.993 "bdev_raid_remove_base_bdev", 00:04:16.993 "bdev_raid_add_base_bdev", 00:04:16.993 "bdev_raid_delete", 00:04:16.993 "bdev_raid_create", 00:04:16.993 "bdev_raid_get_bdevs", 00:04:16.993 "bdev_error_inject_error", 00:04:16.993 "bdev_error_delete", 00:04:16.993 "bdev_error_create", 00:04:16.993 "bdev_split_delete", 00:04:16.993 "bdev_split_create", 00:04:16.993 "bdev_delay_delete", 00:04:16.993 "bdev_delay_create", 00:04:16.993 "bdev_delay_update_latency", 00:04:16.993 "bdev_zone_block_delete", 00:04:16.993 "bdev_zone_block_create", 00:04:16.993 "blobfs_create", 00:04:16.993 "blobfs_detect", 00:04:16.993 "blobfs_set_cache_size", 00:04:16.993 "bdev_aio_delete", 00:04:16.993 "bdev_aio_rescan", 00:04:16.993 "bdev_aio_create", 00:04:16.993 "bdev_ftl_set_property", 00:04:16.993 "bdev_ftl_get_properties", 00:04:16.993 "bdev_ftl_get_stats", 00:04:16.993 "bdev_ftl_unmap", 00:04:16.993 "bdev_ftl_unload", 00:04:16.993 "bdev_ftl_delete", 00:04:16.993 "bdev_ftl_load", 00:04:16.993 "bdev_ftl_create", 00:04:16.993 "bdev_virtio_attach_controller", 00:04:16.993 "bdev_virtio_scsi_get_devices", 00:04:16.993 "bdev_virtio_detach_controller", 00:04:16.993 "bdev_virtio_blk_set_hotplug", 00:04:16.993 "bdev_iscsi_delete", 00:04:16.993 "bdev_iscsi_create", 00:04:16.993 "bdev_iscsi_set_options", 00:04:16.993 "accel_error_inject_error", 00:04:16.993 "ioat_scan_accel_module", 00:04:16.993 "dsa_scan_accel_module", 00:04:16.993 "iaa_scan_accel_module", 00:04:16.993 "vfu_virtio_create_scsi_endpoint", 00:04:16.993 "vfu_virtio_scsi_remove_target", 00:04:16.993 "vfu_virtio_scsi_add_target", 00:04:16.993 "vfu_virtio_create_blk_endpoint", 00:04:16.993 "vfu_virtio_delete_endpoint", 00:04:16.993 "keyring_file_remove_key", 00:04:16.993 "keyring_file_add_key", 00:04:16.993 "keyring_linux_set_options", 00:04:16.993 "iscsi_get_histogram", 00:04:16.993 "iscsi_enable_histogram", 00:04:16.993 "iscsi_set_options", 00:04:16.993 "iscsi_get_auth_groups", 00:04:16.993 "iscsi_auth_group_remove_secret", 00:04:16.993 "iscsi_auth_group_add_secret", 00:04:16.993 "iscsi_delete_auth_group", 00:04:16.993 "iscsi_create_auth_group", 00:04:16.993 "iscsi_set_discovery_auth", 00:04:16.993 "iscsi_get_options", 00:04:16.993 "iscsi_target_node_request_logout", 00:04:16.993 "iscsi_target_node_set_redirect", 00:04:16.993 "iscsi_target_node_set_auth", 00:04:16.993 "iscsi_target_node_add_lun", 00:04:16.993 "iscsi_get_stats", 00:04:16.993 "iscsi_get_connections", 00:04:16.993 "iscsi_portal_group_set_auth", 00:04:16.993 "iscsi_start_portal_group", 00:04:16.993 "iscsi_delete_portal_group", 00:04:16.993 "iscsi_create_portal_group", 00:04:16.993 "iscsi_get_portal_groups", 00:04:16.993 "iscsi_delete_target_node", 00:04:16.993 "iscsi_target_node_remove_pg_ig_maps", 00:04:16.993 "iscsi_target_node_add_pg_ig_maps", 00:04:16.993 "iscsi_create_target_node", 00:04:16.993 "iscsi_get_target_nodes", 00:04:16.993 "iscsi_delete_initiator_group", 00:04:16.993 "iscsi_initiator_group_remove_initiators", 00:04:16.993 "iscsi_initiator_group_add_initiators", 00:04:16.993 "iscsi_create_initiator_group", 00:04:16.993 "iscsi_get_initiator_groups", 00:04:16.993 "nvmf_set_crdt", 00:04:16.993 "nvmf_set_config", 00:04:16.993 "nvmf_set_max_subsystems", 00:04:16.993 "nvmf_stop_mdns_prr", 00:04:16.993 "nvmf_publish_mdns_prr", 00:04:16.993 "nvmf_subsystem_get_listeners", 00:04:16.993 "nvmf_subsystem_get_qpairs", 00:04:16.993 "nvmf_subsystem_get_controllers", 00:04:16.993 
"nvmf_get_stats", 00:04:16.993 "nvmf_get_transports", 00:04:16.993 "nvmf_create_transport", 00:04:16.993 "nvmf_get_targets", 00:04:16.993 "nvmf_delete_target", 00:04:16.993 "nvmf_create_target", 00:04:16.993 "nvmf_subsystem_allow_any_host", 00:04:16.993 "nvmf_subsystem_remove_host", 00:04:16.993 "nvmf_subsystem_add_host", 00:04:16.993 "nvmf_ns_remove_host", 00:04:16.993 "nvmf_ns_add_host", 00:04:16.993 "nvmf_subsystem_remove_ns", 00:04:16.993 "nvmf_subsystem_add_ns", 00:04:16.993 "nvmf_subsystem_listener_set_ana_state", 00:04:16.993 "nvmf_discovery_get_referrals", 00:04:16.993 "nvmf_discovery_remove_referral", 00:04:16.993 "nvmf_discovery_add_referral", 00:04:16.993 "nvmf_subsystem_remove_listener", 00:04:16.993 "nvmf_subsystem_add_listener", 00:04:16.993 "nvmf_delete_subsystem", 00:04:16.993 "nvmf_create_subsystem", 00:04:16.993 "nvmf_get_subsystems", 00:04:16.993 "env_dpdk_get_mem_stats", 00:04:16.993 "nbd_get_disks", 00:04:16.993 "nbd_stop_disk", 00:04:16.993 "nbd_start_disk", 00:04:16.993 "ublk_recover_disk", 00:04:16.993 "ublk_get_disks", 00:04:16.993 "ublk_stop_disk", 00:04:16.993 "ublk_start_disk", 00:04:16.993 "ublk_destroy_target", 00:04:16.993 "ublk_create_target", 00:04:16.993 "virtio_blk_create_transport", 00:04:16.993 "virtio_blk_get_transports", 00:04:16.993 "vhost_controller_set_coalescing", 00:04:16.993 "vhost_get_controllers", 00:04:16.993 "vhost_delete_controller", 00:04:16.994 "vhost_create_blk_controller", 00:04:16.994 "vhost_scsi_controller_remove_target", 00:04:16.994 "vhost_scsi_controller_add_target", 00:04:16.994 "vhost_start_scsi_controller", 00:04:16.994 "vhost_create_scsi_controller", 00:04:16.994 "thread_set_cpumask", 00:04:16.994 "framework_get_governor", 00:04:16.994 "framework_get_scheduler", 00:04:16.994 "framework_set_scheduler", 00:04:16.994 "framework_get_reactors", 00:04:16.994 "thread_get_io_channels", 00:04:16.994 "thread_get_pollers", 00:04:16.994 "thread_get_stats", 00:04:16.994 "framework_monitor_context_switch", 00:04:16.994 "spdk_kill_instance", 00:04:16.994 "log_enable_timestamps", 00:04:16.994 "log_get_flags", 00:04:16.994 "log_clear_flag", 00:04:16.994 "log_set_flag", 00:04:16.994 "log_get_level", 00:04:16.994 "log_set_level", 00:04:16.994 "log_get_print_level", 00:04:16.994 "log_set_print_level", 00:04:16.994 "framework_enable_cpumask_locks", 00:04:16.994 "framework_disable_cpumask_locks", 00:04:16.994 "framework_wait_init", 00:04:16.994 "framework_start_init", 00:04:16.994 "scsi_get_devices", 00:04:16.994 "bdev_get_histogram", 00:04:16.994 "bdev_enable_histogram", 00:04:16.994 "bdev_set_qos_limit", 00:04:16.994 "bdev_set_qd_sampling_period", 00:04:16.994 "bdev_get_bdevs", 00:04:16.994 "bdev_reset_iostat", 00:04:16.994 "bdev_get_iostat", 00:04:16.994 "bdev_examine", 00:04:16.994 "bdev_wait_for_examine", 00:04:16.994 "bdev_set_options", 00:04:16.994 "notify_get_notifications", 00:04:16.994 "notify_get_types", 00:04:16.994 "accel_get_stats", 00:04:16.994 "accel_set_options", 00:04:16.994 "accel_set_driver", 00:04:16.994 "accel_crypto_key_destroy", 00:04:16.994 "accel_crypto_keys_get", 00:04:16.994 "accel_crypto_key_create", 00:04:16.994 "accel_assign_opc", 00:04:16.994 "accel_get_module_info", 00:04:16.994 "accel_get_opc_assignments", 00:04:16.994 "vmd_rescan", 00:04:16.994 "vmd_remove_device", 00:04:16.994 "vmd_enable", 00:04:16.994 "sock_get_default_impl", 00:04:16.994 "sock_set_default_impl", 00:04:16.994 "sock_impl_set_options", 00:04:16.994 "sock_impl_get_options", 00:04:16.994 "iobuf_get_stats", 00:04:16.994 "iobuf_set_options", 
00:04:16.994 "keyring_get_keys", 00:04:16.994 "framework_get_pci_devices", 00:04:16.994 "framework_get_config", 00:04:16.994 "framework_get_subsystems", 00:04:16.994 "vfu_tgt_set_base_path", 00:04:16.994 "trace_get_info", 00:04:16.994 "trace_get_tpoint_group_mask", 00:04:16.994 "trace_disable_tpoint_group", 00:04:16.994 "trace_enable_tpoint_group", 00:04:16.994 "trace_clear_tpoint_mask", 00:04:16.994 "trace_set_tpoint_mask", 00:04:16.994 "spdk_get_version", 00:04:16.994 "rpc_get_methods" 00:04:16.994 ] 00:04:16.994 18:58:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.994 18:58:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:16.994 18:58:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3184069 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3184069 ']' 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3184069 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3184069 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3184069' 00:04:16.994 killing process with pid 3184069 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3184069 00:04:16.994 18:58:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3184069 00:04:17.589 00:04:17.589 real 0m1.812s 00:04:17.589 user 0m3.458s 00:04:17.589 sys 0m0.494s 00:04:17.589 18:58:57 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.589 18:58:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.589 ************************************ 00:04:17.589 END TEST spdkcli_tcp 00:04:17.589 ************************************ 00:04:17.589 18:58:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.589 18:58:57 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:17.589 18:58:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.589 18:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.589 18:58:57 -- common/autotest_common.sh@10 -- # set +x 00:04:17.589 ************************************ 00:04:17.589 START TEST dpdk_mem_utility 00:04:17.589 ************************************ 00:04:17.589 18:58:57 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:17.589 * Looking for test storage... 
00:04:17.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:17.589 18:58:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:17.589 18:58:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3184282 00:04:17.589 18:58:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.589 18:58:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3184282 00:04:17.589 18:58:57 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3184282 ']' 00:04:17.589 18:58:57 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.589 18:58:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.589 18:58:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.589 18:58:57 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.589 18:58:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:17.589 [2024-07-15 18:58:57.978900] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:17.589 [2024-07-15 18:58:57.978995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184282 ] 00:04:17.589 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.847 [2024-07-15 18:58:58.040326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.847 [2024-07-15 18:58:58.155738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.105 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.105 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:18.105 18:58:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:18.105 18:58:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:18.105 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.105 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.105 { 00:04:18.105 "filename": "/tmp/spdk_mem_dump.txt" 00:04:18.105 } 00:04:18.105 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.105 18:58:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:18.105 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:18.105 1 heaps totaling size 814.000000 MiB 00:04:18.105 size: 814.000000 MiB heap id: 0 00:04:18.105 end heaps---------- 00:04:18.105 8 mempools totaling size 598.116089 MiB 00:04:18.105 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:18.105 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:18.105 size: 84.521057 MiB name: bdev_io_3184282 00:04:18.105 size: 51.011292 MiB name: evtpool_3184282 00:04:18.105 
size: 50.003479 MiB name: msgpool_3184282 00:04:18.105 size: 21.763794 MiB name: PDU_Pool 00:04:18.105 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:18.105 size: 0.026123 MiB name: Session_Pool 00:04:18.105 end mempools------- 00:04:18.105 6 memzones totaling size 4.142822 MiB 00:04:18.105 size: 1.000366 MiB name: RG_ring_0_3184282 00:04:18.105 size: 1.000366 MiB name: RG_ring_1_3184282 00:04:18.105 size: 1.000366 MiB name: RG_ring_4_3184282 00:04:18.105 size: 1.000366 MiB name: RG_ring_5_3184282 00:04:18.105 size: 0.125366 MiB name: RG_ring_2_3184282 00:04:18.105 size: 0.015991 MiB name: RG_ring_3_3184282 00:04:18.105 end memzones------- 00:04:18.105 18:58:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:18.363 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:18.363 list of free elements. size: 12.519348 MiB 00:04:18.363 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:18.363 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:18.363 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:18.363 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:18.363 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:18.363 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:18.363 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:18.363 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:18.363 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:18.363 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:18.363 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:18.363 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:18.363 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:18.363 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:18.363 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:18.363 list of standard malloc elements. 
size: 199.218079 MiB 00:04:18.363 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:18.363 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:18.363 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:18.363 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:18.363 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:18.363 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:18.363 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:18.363 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:18.363 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:18.363 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:18.363 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:18.363 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:18.363 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:18.363 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:18.363 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:18.363 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:18.363 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:18.363 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:18.363 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:18.363 list of memzone associated elements. 
size: 602.262573 MiB 00:04:18.363 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:18.363 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:18.363 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:18.363 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:18.363 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:18.363 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3184282_0 00:04:18.363 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:18.363 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3184282_0 00:04:18.363 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:18.363 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3184282_0 00:04:18.363 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:18.363 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:18.363 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:18.363 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:18.363 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:18.363 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3184282 00:04:18.363 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:18.363 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3184282 00:04:18.363 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:18.363 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3184282 00:04:18.363 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:18.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:18.363 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:18.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:18.363 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:18.363 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:18.363 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:18.363 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:18.363 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:18.363 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3184282 00:04:18.363 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:18.363 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3184282 00:04:18.363 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:18.363 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3184282 00:04:18.363 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:18.363 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3184282 00:04:18.363 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:18.363 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3184282 00:04:18.363 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:18.363 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:18.363 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:18.363 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:18.363 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:18.363 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:18.363 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:18.363 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3184282 00:04:18.363 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:18.363 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:18.363 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:18.363 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:18.363 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:18.363 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3184282 00:04:18.363 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:18.363 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:18.363 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:18.363 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3184282 00:04:18.363 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:18.363 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3184282 00:04:18.363 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:18.363 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:18.363 18:58:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:18.363 18:58:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3184282 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3184282 ']' 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3184282 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3184282 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3184282' 00:04:18.363 killing process with pid 3184282 00:04:18.363 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3184282 00:04:18.364 18:58:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3184282 00:04:18.620 00:04:18.620 real 0m1.167s 00:04:18.620 user 0m1.144s 00:04:18.620 sys 0m0.409s 00:04:18.620 18:58:59 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.620 18:58:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.620 ************************************ 00:04:18.620 END TEST dpdk_mem_utility 00:04:18.620 ************************************ 00:04:18.878 18:58:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:18.878 18:58:59 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:18.878 18:58:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.878 18:58:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.878 18:58:59 -- common/autotest_common.sh@10 -- # set +x 00:04:18.878 ************************************ 00:04:18.878 START TEST event 00:04:18.878 ************************************ 00:04:18.878 18:58:59 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:18.878 * Looking for test storage... 
00:04:18.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:18.878 18:58:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:18.878 18:58:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:18.878 18:58:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:18.878 18:58:59 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:18.878 18:58:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.878 18:58:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.878 ************************************ 00:04:18.878 START TEST event_perf 00:04:18.878 ************************************ 00:04:18.878 18:58:59 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:18.878 Running I/O for 1 seconds...[2024-07-15 18:58:59.178683] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:18.878 [2024-07-15 18:58:59.178757] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184516 ] 00:04:18.878 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.878 [2024-07-15 18:58:59.242117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:19.135 [2024-07-15 18:58:59.355936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.135 [2024-07-15 18:58:59.355996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:19.135 [2024-07-15 18:58:59.356060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:19.135 [2024-07-15 18:58:59.356063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.067 Running I/O for 1 seconds... 00:04:20.067 lcore 0: 238922 00:04:20.067 lcore 1: 238919 00:04:20.067 lcore 2: 238920 00:04:20.067 lcore 3: 238921 00:04:20.067 done. 00:04:20.067 00:04:20.067 real 0m1.316s 00:04:20.067 user 0m4.221s 00:04:20.067 sys 0m0.090s 00:04:20.067 18:59:00 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.067 18:59:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:20.067 ************************************ 00:04:20.067 END TEST event_perf 00:04:20.067 ************************************ 00:04:20.325 18:59:00 event -- common/autotest_common.sh@1142 -- # return 0 00:04:20.325 18:59:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:20.325 18:59:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:20.325 18:59:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.325 18:59:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.325 ************************************ 00:04:20.325 START TEST event_reactor 00:04:20.325 ************************************ 00:04:20.325 18:59:00 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:20.325 [2024-07-15 18:59:00.547816] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
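event_perf above ran with -m 0xF -t 1, i.e. one second on reactors pinned to cores 0-3, and reported a per-lcore event count; the reactor tests that follow use core mask 0x1, a single reactor on core 0. The mask is a plain hex bitmap of CPU cores. A small illustrative decoder (not part of the test scripts) for reading such a mask:

    # expand an SPDK/DPDK core mask into the cores it selects
    mask=0xF
    for core in $(seq 0 63); do
        if (( (mask >> core) & 1 )); then
            echo "reactor would run on core $core"
        fi
    done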
00:04:20.325 [2024-07-15 18:59:00.547890] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184752 ] 00:04:20.325 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.325 [2024-07-15 18:59:00.613172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.325 [2024-07-15 18:59:00.731676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.698 test_start 00:04:21.698 oneshot 00:04:21.698 tick 100 00:04:21.698 tick 100 00:04:21.698 tick 250 00:04:21.698 tick 100 00:04:21.698 tick 100 00:04:21.698 tick 100 00:04:21.698 tick 250 00:04:21.698 tick 500 00:04:21.698 tick 100 00:04:21.698 tick 100 00:04:21.698 tick 250 00:04:21.698 tick 100 00:04:21.698 tick 100 00:04:21.698 test_end 00:04:21.698 00:04:21.698 real 0m1.322s 00:04:21.698 user 0m1.228s 00:04:21.698 sys 0m0.089s 00:04:21.698 18:59:01 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.698 18:59:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:21.698 ************************************ 00:04:21.698 END TEST event_reactor 00:04:21.698 ************************************ 00:04:21.698 18:59:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:21.698 18:59:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.698 18:59:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:21.698 18:59:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.698 18:59:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.698 ************************************ 00:04:21.698 START TEST event_reactor_perf 00:04:21.698 ************************************ 00:04:21.698 18:59:01 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:21.698 [2024-07-15 18:59:01.919951] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:21.698 [2024-07-15 18:59:01.920016] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184910 ] 00:04:21.698 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.698 [2024-07-15 18:59:01.981464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.698 [2024-07-15 18:59:02.100092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.076 test_start 00:04:23.076 test_end 00:04:23.076 Performance: 362724 events per second 00:04:23.076 00:04:23.076 real 0m1.318s 00:04:23.076 user 0m1.230s 00:04:23.076 sys 0m0.083s 00:04:23.076 18:59:03 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.076 18:59:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:23.076 ************************************ 00:04:23.076 END TEST event_reactor_perf 00:04:23.076 ************************************ 00:04:23.076 18:59:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:23.076 18:59:03 event -- event/event.sh@49 -- # uname -s 00:04:23.076 18:59:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:23.076 18:59:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:23.076 18:59:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.076 18:59:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.076 18:59:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.076 ************************************ 00:04:23.076 START TEST event_scheduler 00:04:23.076 ************************************ 00:04:23.076 18:59:03 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:23.076 * Looking for test storage... 00:04:23.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:23.076 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:23.076 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3185090 00:04:23.076 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:23.076 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.076 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3185090 00:04:23.076 18:59:03 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3185090 ']' 00:04:23.076 18:59:03 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.076 18:59:03 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.076 18:59:03 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:23.076 18:59:03 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.076 18:59:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:23.076 [2024-07-15 18:59:03.368829] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:23.076 [2024-07-15 18:59:03.368942] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3185090 ] 00:04:23.076 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.076 [2024-07-15 18:59:03.427490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:23.335 [2024-07-15 18:59:03.537028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.335 [2024-07-15 18:59:03.537113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:23.335 [2024-07-15 18:59:03.537110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:23.335 [2024-07-15 18:59:03.537053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:23.335 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 [2024-07-15 18:59:03.573840] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:23.335 [2024-07-15 18:59:03.573890] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:23.335 [2024-07-15 18:59:03.573919] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:23.335 [2024-07-15 18:59:03.573930] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:23.335 [2024-07-15 18:59:03.573941] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 [2024-07-15 18:59:03.671161] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
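scheduler.sh has started its test application with --wait-for-rpc, switched it to the dynamic scheduler (the DPDK governor is unavailable on this host, as noted above), and completed framework init; the scheduler_create_thread steps that follow create pinned active and idle threads through the scheduler_plugin RPC extension. Driven by hand, the same flow looks roughly like this, assuming rpc.py can locate the scheduler_plugin module the way the test sets it up (thread names, masks, and activity levels are taken from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # choose the dynamic scheduler, then let the framework finish initialization
    $RPC framework_set_scheduler dynamic
    $RPC framework_start_init

    # a fully busy thread pinned to core 0 and an idle thread pinned to core 1
    $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $RPC --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x2 -a 0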
00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 ************************************ 00:04:23.335 START TEST scheduler_create_thread 00:04:23.335 ************************************ 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 2 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 3 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 4 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 5 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 6 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 7 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 8 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 9 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.335 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.593 10 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.593 18:59:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:23.851 18:59:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.851 00:04:23.851 real 0m0.587s 00:04:23.851 user 0m0.006s 00:04:23.851 sys 0m0.007s 00:04:23.851 18:59:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.108 18:59:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.109 ************************************ 00:04:24.109 END TEST scheduler_create_thread 00:04:24.109 ************************************ 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:24.109 18:59:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:24.109 18:59:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3185090 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3185090 ']' 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3185090 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3185090 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3185090' 00:04:24.109 killing process with pid 3185090 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3185090 00:04:24.109 18:59:04 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3185090 00:04:24.366 [2024-07-15 18:59:04.763227] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:24.625 00:04:24.625 real 0m1.740s 00:04:24.625 user 0m2.136s 00:04:24.625 sys 0m0.318s 00:04:24.625 18:59:05 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.625 18:59:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.625 ************************************ 00:04:24.625 END TEST event_scheduler 00:04:24.625 ************************************ 00:04:24.625 18:59:05 event -- common/autotest_common.sh@1142 -- # return 0 00:04:24.625 18:59:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:24.625 18:59:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:24.625 18:59:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.625 18:59:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.625 18:59:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.883 ************************************ 00:04:24.883 START TEST app_repeat 00:04:24.883 ************************************ 00:04:24.883 18:59:05 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3185404 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3185404' 00:04:24.883 Process app_repeat pid: 3185404 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:24.883 spdk_app_start Round 0 00:04:24.883 18:59:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3185404 /var/tmp/spdk-nbd.sock 00:04:24.883 18:59:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3185404 ']' 00:04:24.883 18:59:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.883 18:59:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.883 18:59:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.883 18:59:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.883 18:59:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.883 [2024-07-15 18:59:05.095222] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:24.883 [2024-07-15 18:59:05.095285] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3185404 ] 00:04:24.883 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.883 [2024-07-15 18:59:05.162007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.883 [2024-07-15 18:59:05.277413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.883 [2024-07-15 18:59:05.277420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.140 18:59:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.140 18:59:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:25.140 18:59:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.398 Malloc0 00:04:25.398 18:59:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.655 Malloc1 00:04:25.655 18:59:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.655 18:59:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.912 /dev/nbd0 00:04:25.912 18:59:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.912 18:59:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.912 18:59:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:25.912 18:59:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:25.912 18:59:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.912 18:59:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.912 18:59:06 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:25.912 18:59:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:25.912 18:59:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.912 18:59:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.913 18:59:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.913 1+0 records in 00:04:25.913 1+0 records out 00:04:25.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230599 s, 17.8 MB/s 00:04:25.913 18:59:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.913 18:59:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:25.913 18:59:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.913 18:59:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.913 18:59:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:25.913 18:59:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.913 18:59:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.913 18:59:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.170 /dev/nbd1 00:04:26.170 18:59:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.170 18:59:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.170 1+0 records in 00:04:26.170 1+0 records out 00:04:26.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213579 s, 19.2 MB/s 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:26.170 18:59:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:26.170 18:59:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.170 18:59:06 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.170 18:59:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.170 18:59:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.170 18:59:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.428 18:59:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:26.428 { 00:04:26.428 "nbd_device": "/dev/nbd0", 00:04:26.428 "bdev_name": "Malloc0" 00:04:26.428 }, 00:04:26.428 { 00:04:26.428 "nbd_device": "/dev/nbd1", 00:04:26.428 "bdev_name": "Malloc1" 00:04:26.429 } 00:04:26.429 ]' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.429 { 00:04:26.429 "nbd_device": "/dev/nbd0", 00:04:26.429 "bdev_name": "Malloc0" 00:04:26.429 }, 00:04:26.429 { 00:04:26.429 "nbd_device": "/dev/nbd1", 00:04:26.429 "bdev_name": "Malloc1" 00:04:26.429 } 00:04:26.429 ]' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.429 /dev/nbd1' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.429 /dev/nbd1' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.429 256+0 records in 00:04:26.429 256+0 records out 00:04:26.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500777 s, 209 MB/s 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.429 256+0 records in 00:04:26.429 256+0 records out 00:04:26.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236826 s, 44.3 MB/s 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.429 256+0 records in 00:04:26.429 256+0 records out 00:04:26.429 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0253303 s, 41.4 MB/s 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.429 18:59:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.687 18:59:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.944 18:59:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.944 18:59:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.944 18:59:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.944 18:59:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.944 18:59:07 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.944 18:59:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.201 18:59:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.482 18:59:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.482 18:59:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.739 18:59:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.997 [2024-07-15 18:59:08.205951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.997 [2024-07-15 18:59:08.319891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.997 [2024-07-15 18:59:08.319891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.997 [2024-07-15 18:59:08.378403] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.997 [2024-07-15 18:59:08.378486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.520 18:59:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:30.520 18:59:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:30.520 spdk_app_start Round 1 00:04:30.520 18:59:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3185404 /var/tmp/spdk-nbd.sock 00:04:30.520 18:59:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3185404 ']' 00:04:30.520 18:59:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.520 18:59:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.520 18:59:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
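The Round 0 trace above is the nbd_rpc_data_verify helper at work: two 64 MiB malloc bdevs with a 4096-byte block size are exported as NBD devices, 1 MiB of random data is written through each device and compared back, and the devices are stopped before the app is signalled. Condensed into a sketch (file paths shortened, the 20-attempt waitfornbd retry loops and JSON bookkeeping omitted; the rpc variable is shorthand introduced here, not part of the test script):

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"       # socket the app_repeat instance listens on
  $rpc bdev_malloc_create 64 4096                       # -> Malloc0
  $rpc bdev_malloc_create 64 4096                       # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  grep -q -w nbd0 /proc/partitions                      # waitfornbd: device is visible to the kernel
  dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct   # confirm the device answers reads
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                    # verify the data reads back unchanged
  cmp -b -n 1M nbdrandtest /dev/nbd1
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_get_disks                                    # expected to report no devices before shutdown
  $rpc spdk_kill_instance SIGTERM

Rounds 1 and 2 below repeat exactly this cycle against a freshly restarted app instance.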
00:04:30.520 18:59:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.520 18:59:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.777 18:59:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.777 18:59:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:30.777 18:59:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.036 Malloc0 00:04:31.036 18:59:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.294 Malloc1 00:04:31.294 18:59:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.294 18:59:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.552 /dev/nbd0 00:04:31.552 18:59:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.552 18:59:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:31.552 1+0 records in 00:04:31.552 1+0 records out 00:04:31.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208102 s, 19.7 MB/s 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:31.552 18:59:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:31.552 18:59:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.552 18:59:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.552 18:59:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.810 /dev/nbd1 00:04:31.810 18:59:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.810 18:59:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:31.810 18:59:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.068 1+0 records in 00:04:32.068 1+0 records out 00:04:32.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195518 s, 20.9 MB/s 00:04:32.068 18:59:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.068 18:59:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:32.068 18:59:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.068 18:59:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:32.068 18:59:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:32.068 { 00:04:32.068 "nbd_device": "/dev/nbd0", 00:04:32.068 "bdev_name": "Malloc0" 00:04:32.068 }, 00:04:32.068 { 00:04:32.068 "nbd_device": "/dev/nbd1", 00:04:32.068 "bdev_name": "Malloc1" 00:04:32.068 } 00:04:32.068 ]' 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.068 { 00:04:32.068 "nbd_device": "/dev/nbd0", 00:04:32.068 "bdev_name": "Malloc0" 00:04:32.068 }, 00:04:32.068 { 00:04:32.068 "nbd_device": "/dev/nbd1", 00:04:32.068 "bdev_name": "Malloc1" 00:04:32.068 } 00:04:32.068 ]' 00:04:32.068 18:59:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:32.325 /dev/nbd1' 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:32.325 /dev/nbd1' 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.325 256+0 records in 00:04:32.325 256+0 records out 00:04:32.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505646 s, 207 MB/s 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.325 256+0 records in 00:04:32.325 256+0 records out 00:04:32.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246784 s, 42.5 MB/s 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.325 256+0 records in 00:04:32.325 256+0 records out 00:04:32.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244472 s, 42.9 MB/s 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.325 18:59:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.326 18:59:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.583 18:59:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.840 18:59:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.840 18:59:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.840 18:59:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.840 18:59:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.840 18:59:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.841 18:59:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.841 18:59:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.841 18:59:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.841 18:59:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.841 18:59:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.841 18:59:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:33.097 18:59:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:33.097 18:59:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.355 18:59:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:33.612 [2024-07-15 18:59:13.965059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.870 [2024-07-15 18:59:14.080628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.870 [2024-07-15 18:59:14.080633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.870 [2024-07-15 18:59:14.142838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.871 [2024-07-15 18:59:14.142943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.404 18:59:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.404 18:59:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:36.404 spdk_app_start Round 2 00:04:36.404 18:59:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3185404 /var/tmp/spdk-nbd.sock 00:04:36.404 18:59:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3185404 ']' 00:04:36.404 18:59:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.404 18:59:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.404 18:59:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:36.404 18:59:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.404 18:59:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.660 18:59:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.660 18:59:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:36.660 18:59:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.917 Malloc0 00:04:36.917 18:59:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.175 Malloc1 00:04:37.175 18:59:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.175 18:59:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.433 /dev/nbd0 00:04:37.433 18:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.433 18:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:37.433 1+0 records in 00:04:37.433 1+0 records out 00:04:37.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00013375 s, 30.6 MB/s 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:37.433 18:59:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:37.433 18:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.433 18:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.434 18:59:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.691 /dev/nbd1 00:04:37.691 18:59:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.691 18:59:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.691 1+0 records in 00:04:37.691 1+0 records out 00:04:37.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209199 s, 19.6 MB/s 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:37.691 18:59:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:37.691 18:59:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.691 18:59:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.691 18:59:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.691 18:59:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.691 18:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:37.948 { 00:04:37.948 "nbd_device": "/dev/nbd0", 00:04:37.948 "bdev_name": "Malloc0" 00:04:37.948 }, 00:04:37.948 { 00:04:37.948 "nbd_device": "/dev/nbd1", 00:04:37.948 "bdev_name": "Malloc1" 00:04:37.948 } 00:04:37.948 ]' 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:37.948 { 00:04:37.948 "nbd_device": "/dev/nbd0", 00:04:37.948 "bdev_name": "Malloc0" 00:04:37.948 }, 00:04:37.948 { 00:04:37.948 "nbd_device": "/dev/nbd1", 00:04:37.948 "bdev_name": "Malloc1" 00:04:37.948 } 00:04:37.948 ]' 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:37.948 /dev/nbd1' 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:37.948 /dev/nbd1' 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:37.948 256+0 records in 00:04:37.948 256+0 records out 00:04:37.948 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518661 s, 202 MB/s 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.948 18:59:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.948 256+0 records in 00:04:37.948 256+0 records out 00:04:37.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212747 s, 49.3 MB/s 00:04:37.949 18:59:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.949 18:59:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.206 256+0 records in 00:04:38.206 256+0 records out 00:04:38.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246437 s, 42.5 MB/s 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.206 18:59:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.478 18:59:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.737 18:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.995 18:59:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.995 18:59:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.254 18:59:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:39.514 [2024-07-15 18:59:19.789423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.514 [2024-07-15 18:59:19.912076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.514 [2024-07-15 18:59:19.912076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.774 [2024-07-15 18:59:19.974874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.774 [2024-07-15 18:59:19.974973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:42.310 18:59:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3185404 /var/tmp/spdk-nbd.sock 00:04:42.310 18:59:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3185404 ']' 00:04:42.310 18:59:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.310 18:59:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.310 18:59:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
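The repeated create/verify/kill cycles above come from the outer loop of app_repeat_test: three numbered rounds, each ending with spdk_kill_instance and a short sleep while the app restarts itself, followed by a final restart before teardown. A condensed outline of the traced calls (paths abbreviated, the SIGINT/SIGTERM trap and error handling omitted; the data-verify step is the Round 0 sketch above):

  test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      # Malloc0/Malloc1 created, exported over NBD, written and verified
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                        # the app restarts spdk_app_start for the next round
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # Round 3 comes up, then the test tears it down
  killprocess "$repeat_pid"

This is an outline of the shell calls recorded in the xtrace, not the literal contents of test/event/event.sh.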
00:04:42.310 18:59:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.310 18:59:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:42.570 18:59:22 event.app_repeat -- event/event.sh@39 -- # killprocess 3185404 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3185404 ']' 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3185404 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3185404 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3185404' 00:04:42.570 killing process with pid 3185404 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3185404 00:04:42.570 18:59:22 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3185404 00:04:42.829 spdk_app_start is called in Round 0. 00:04:42.829 Shutdown signal received, stop current app iteration 00:04:42.829 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:04:42.829 spdk_app_start is called in Round 1. 00:04:42.829 Shutdown signal received, stop current app iteration 00:04:42.829 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:04:42.829 spdk_app_start is called in Round 2. 00:04:42.829 Shutdown signal received, stop current app iteration 00:04:42.829 Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 reinitialization... 00:04:42.829 spdk_app_start is called in Round 3. 
00:04:42.829 Shutdown signal received, stop current app iteration 00:04:42.829 18:59:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:42.829 18:59:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:42.829 00:04:42.829 real 0m17.988s 00:04:42.829 user 0m38.873s 00:04:42.829 sys 0m3.209s 00:04:42.829 18:59:23 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.829 18:59:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.829 ************************************ 00:04:42.829 END TEST app_repeat 00:04:42.829 ************************************ 00:04:42.829 18:59:23 event -- common/autotest_common.sh@1142 -- # return 0 00:04:42.829 18:59:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:42.829 18:59:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.829 18:59:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.829 18:59:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.829 18:59:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.829 ************************************ 00:04:42.829 START TEST cpu_locks 00:04:42.829 ************************************ 00:04:42.829 18:59:23 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.829 * Looking for test storage... 00:04:42.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.829 18:59:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:42.829 18:59:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:42.829 18:59:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:42.829 18:59:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:42.829 18:59:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.829 18:59:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.829 18:59:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.829 ************************************ 00:04:42.829 START TEST default_locks 00:04:42.829 ************************************ 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3187755 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3187755 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3187755 ']' 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:42.829 18:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.829 18:59:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.829 [2024-07-15 18:59:23.239206] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:42.829 [2024-07-15 18:59:23.239302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187755 ] 00:04:43.092 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.092 [2024-07-15 18:59:23.302369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.092 [2024-07-15 18:59:23.419553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.027 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.027 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:44.027 18:59:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3187755 00:04:44.027 18:59:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3187755 00:04:44.027 18:59:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.286 lslocks: write error 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3187755 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3187755 ']' 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3187755 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3187755 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3187755' 00:04:44.286 killing process with pid 3187755 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3187755 00:04:44.286 18:59:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3187755 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3187755 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3187755 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3187755 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3187755 ']' 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3187755) - No such process 00:04:44.854 ERROR: process (pid: 3187755) is no longer running 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.854 00:04:44.854 real 0m1.950s 00:04:44.854 user 0m2.065s 00:04:44.854 sys 0m0.612s 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.854 18:59:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.854 ************************************ 00:04:44.854 END TEST default_locks 00:04:44.854 ************************************ 00:04:44.854 18:59:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:44.854 18:59:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:44.854 18:59:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.854 18:59:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.854 18:59:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.854 ************************************ 00:04:44.854 START TEST default_locks_via_rpc 00:04:44.854 ************************************ 00:04:44.854 18:59:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:44.854 18:59:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3188051 00:04:44.854 18:59:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.854 18:59:25 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3188051 00:04:44.854 18:59:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3188051 ']' 00:04:44.855 18:59:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.855 18:59:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.855 18:59:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.855 18:59:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.855 18:59:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.855 [2024-07-15 18:59:25.242086] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:44.855 [2024-07-15 18:59:25.242184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188051 ] 00:04:44.855 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.114 [2024-07-15 18:59:25.304383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.114 [2024-07-15 18:59:25.418321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3188051 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3188051 00:04:46.051 18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.342 
18:59:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3188051 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3188051 ']' 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3188051 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3188051 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:46.342 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3188051' 00:04:46.342 killing process with pid 3188051 00:04:46.343 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3188051 00:04:46.343 18:59:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3188051 00:04:46.912 00:04:46.912 real 0m1.987s 00:04:46.912 user 0m2.095s 00:04:46.912 sys 0m0.615s 00:04:46.912 18:59:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.912 18:59:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.912 ************************************ 00:04:46.912 END TEST default_locks_via_rpc 00:04:46.912 ************************************ 00:04:46.912 18:59:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:46.912 18:59:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:46.912 18:59:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.912 18:59:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.912 18:59:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.912 ************************************ 00:04:46.912 START TEST non_locking_app_on_locked_coremask 00:04:46.912 ************************************ 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3188343 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3188343 /var/tmp/spdk.sock 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3188343 ']' 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.912 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.912 [2024-07-15 18:59:27.282367] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:46.912 [2024-07-15 18:59:27.282460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188343 ] 00:04:46.912 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.912 [2024-07-15 18:59:27.340005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.172 [2024-07-15 18:59:27.453269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3188351 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3188351 /var/tmp/spdk2.sock 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3188351 ']' 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.431 18:59:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.431 [2024-07-15 18:59:27.773786] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:47.431 [2024-07-15 18:59:27.773903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188351 ] 00:04:47.431 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.690 [2024-07-15 18:59:27.867576] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.690 [2024-07-15 18:59:27.867619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.690 [2024-07-15 18:59:28.101928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.627 18:59:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.627 18:59:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:48.627 18:59:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3188343 00:04:48.627 18:59:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.627 18:59:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3188343 00:04:48.886 lslocks: write error 00:04:48.886 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3188343 00:04:48.886 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3188343 ']' 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3188343 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3188343 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3188343' 00:04:48.887 killing process with pid 3188343 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3188343 00:04:48.887 18:59:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3188343 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3188351 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3188351 ']' 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3188351 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3188351 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3188351' 00:04:49.826 
killing process with pid 3188351 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3188351 00:04:49.826 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3188351 00:04:50.394 00:04:50.394 real 0m3.411s 00:04:50.394 user 0m3.536s 00:04:50.394 sys 0m1.067s 00:04:50.394 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.394 18:59:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.394 ************************************ 00:04:50.394 END TEST non_locking_app_on_locked_coremask 00:04:50.394 ************************************ 00:04:50.394 18:59:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:50.394 18:59:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:50.394 18:59:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.394 18:59:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.394 18:59:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.394 ************************************ 00:04:50.394 START TEST locking_app_on_unlocked_coremask 00:04:50.394 ************************************ 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3188781 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3188781 /var/tmp/spdk.sock 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3188781 ']' 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.394 18:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.394 [2024-07-15 18:59:30.740074] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:50.395 [2024-07-15 18:59:30.740170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188781 ] 00:04:50.395 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.395 [2024-07-15 18:59:30.801215] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:50.395 [2024-07-15 18:59:30.801260] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.654 [2024-07-15 18:59:30.915974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3188799 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3188799 /var/tmp/spdk2.sock 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3188799 ']' 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.592 18:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:51.592 [2024-07-15 18:59:31.710566] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:51.593 [2024-07-15 18:59:31.710669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188799 ] 00:04:51.593 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.593 [2024-07-15 18:59:31.808463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.852 [2024-07-15 18:59:32.042677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.419 18:59:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.419 18:59:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:52.419 18:59:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3188799 00:04:52.419 18:59:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3188799 00:04:52.419 18:59:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.986 lslocks: write error 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3188781 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3188781 ']' 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3188781 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3188781 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3188781' 00:04:52.986 killing process with pid 3188781 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3188781 00:04:52.986 18:59:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3188781 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3188799 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3188799 ']' 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3188799 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3188799 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3188799' 00:04:53.926 killing process with pid 3188799 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3188799 00:04:53.926 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3188799 00:04:54.185 00:04:54.185 real 0m3.915s 00:04:54.185 user 0m4.237s 00:04:54.185 sys 0m1.097s 00:04:54.185 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.185 18:59:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.185 ************************************ 00:04:54.185 END TEST locking_app_on_unlocked_coremask 00:04:54.185 ************************************ 00:04:54.445 18:59:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:54.445 18:59:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:54.445 18:59:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.445 18:59:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.445 18:59:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:54.445 ************************************ 00:04:54.445 START TEST locking_app_on_locked_coremask 00:04:54.445 ************************************ 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3189226 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3189226 /var/tmp/spdk.sock 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3189226 ']' 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.445 18:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.445 [2024-07-15 18:59:34.701520] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:54.445 [2024-07-15 18:59:34.701603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189226 ] 00:04:54.445 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.445 [2024-07-15 18:59:34.758710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.445 [2024-07-15 18:59:34.868480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3189304 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3189304 /var/tmp/spdk2.sock 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3189304 /var/tmp/spdk2.sock 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3189304 /var/tmp/spdk2.sock 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3189304 ']' 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.705 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.965 [2024-07-15 18:59:35.175979] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:54.965 [2024-07-15 18:59:35.176063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189304 ] 00:04:54.965 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.965 [2024-07-15 18:59:35.267046] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3189226 has claimed it. 00:04:54.965 [2024-07-15 18:59:35.267105] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3189304) - No such process 00:04:55.534 ERROR: process (pid: 3189304) is no longer running 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3189226 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3189226 00:04:55.534 18:59:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.792 lslocks: write error 00:04:55.792 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3189226 00:04:55.792 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3189226 ']' 00:04:55.792 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3189226 00:04:55.792 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:55.792 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.792 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3189226 00:04:56.051 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.051 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.051 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3189226' 00:04:56.051 killing process with pid 3189226 00:04:56.051 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3189226 00:04:56.051 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3189226 00:04:56.309 00:04:56.309 real 0m2.026s 00:04:56.309 user 0m2.164s 00:04:56.309 sys 0m0.662s 00:04:56.309 18:59:36 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.309 18:59:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.309 ************************************ 00:04:56.309 END TEST locking_app_on_locked_coremask 00:04:56.309 ************************************ 00:04:56.309 18:59:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:56.309 18:59:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:56.309 18:59:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.309 18:59:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.309 18:59:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.309 ************************************ 00:04:56.309 START TEST locking_overlapped_coremask 00:04:56.309 ************************************ 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3189522 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3189522 /var/tmp/spdk.sock 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3189522 ']' 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.309 18:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.567 [2024-07-15 18:59:36.780469] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:56.567 [2024-07-15 18:59:36.780572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189522 ] 00:04:56.567 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.567 [2024-07-15 18:59:36.843068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.567 [2024-07-15 18:59:36.960197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.567 [2024-07-15 18:59:36.960266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.567 [2024-07-15 18:59:36.960270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3189660 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3189660 /var/tmp/spdk2.sock 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3189660 /var/tmp/spdk2.sock 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3189660 /var/tmp/spdk2.sock 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3189660 ']' 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.502 18:59:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.502 [2024-07-15 18:59:37.759867] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:04:57.503 [2024-07-15 18:59:37.759978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189660 ] 00:04:57.503 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.503 [2024-07-15 18:59:37.846231] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3189522 has claimed it. 00:04:57.503 [2024-07-15 18:59:37.846294] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:58.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3189660) - No such process 00:04:58.099 ERROR: process (pid: 3189660) is no longer running 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3189522 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3189522 ']' 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3189522 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3189522 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3189522' 00:04:58.099 killing process with pid 3189522 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3189522 00:04:58.099 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3189522 00:04:58.678 00:04:58.678 real 0m2.194s 00:04:58.678 user 0m6.106s 00:04:58.678 sys 0m0.505s 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 ************************************ 00:04:58.678 END TEST locking_overlapped_coremask 00:04:58.678 ************************************ 00:04:58.678 18:59:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:58.678 18:59:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.678 18:59:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.678 18:59:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.678 18:59:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 ************************************ 00:04:58.678 START TEST locking_overlapped_coremask_via_rpc 00:04:58.678 ************************************ 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3189824 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3189824 /var/tmp/spdk.sock 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3189824 ']' 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.678 18:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.678 [2024-07-15 18:59:39.024615] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:58.678 [2024-07-15 18:59:39.024716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189824 ] 00:04:58.678 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.678 [2024-07-15 18:59:39.086200] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:58.678 [2024-07-15 18:59:39.086250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.945 [2024-07-15 18:59:39.201638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.945 [2024-07-15 18:59:39.201703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.945 [2024-07-15 18:59:39.201705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3189962 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3189962 /var/tmp/spdk2.sock 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3189962 ']' 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.883 18:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.883 [2024-07-15 18:59:39.997164] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:04:59.883 [2024-07-15 18:59:39.997274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189962 ] 00:04:59.883 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.883 [2024-07-15 18:59:40.090088] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:59.883 [2024-07-15 18:59:40.090138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.146 [2024-07-15 18:59:40.316912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.146 [2024-07-15 18:59:40.316943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:00.146 [2024-07-15 18:59:40.316945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.751 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.751 [2024-07-15 18:59:40.961988] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3189824 has claimed it. 
00:05:00.751 request: 00:05:00.751 { 00:05:00.751 "method": "framework_enable_cpumask_locks", 00:05:00.752 "req_id": 1 00:05:00.752 } 00:05:00.752 Got JSON-RPC error response 00:05:00.752 response: 00:05:00.752 { 00:05:00.752 "code": -32603, 00:05:00.752 "message": "Failed to claim CPU core: 2" 00:05:00.752 } 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3189824 /var/tmp/spdk.sock 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3189824 ']' 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.752 18:59:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3189962 /var/tmp/spdk2.sock 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3189962 ']' 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.009 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.269 00:05:01.269 real 0m2.490s 00:05:01.269 user 0m1.185s 00:05:01.269 sys 0m0.234s 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.269 18:59:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.269 ************************************ 00:05:01.269 END TEST locking_overlapped_coremask_via_rpc 00:05:01.269 ************************************ 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:01.269 18:59:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:01.269 18:59:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3189824 ]] 00:05:01.269 18:59:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3189824 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3189824 ']' 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3189824 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3189824 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3189824' 00:05:01.269 killing process with pid 3189824 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3189824 00:05:01.269 18:59:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3189824 00:05:01.528 18:59:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3189962 ]] 00:05:01.528 18:59:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3189962 00:05:01.528 18:59:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3189962 ']' 00:05:01.528 18:59:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3189962 00:05:01.528 18:59:41 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:01.788 18:59:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.788 18:59:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3189962 00:05:01.788 18:59:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:01.788 18:59:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:01.788 18:59:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3189962' 00:05:01.788 killing process with pid 3189962 00:05:01.788 18:59:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3189962 00:05:01.788 18:59:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3189962 00:05:02.048 18:59:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.048 18:59:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:02.048 18:59:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3189824 ]] 00:05:02.048 18:59:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3189824 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3189824 ']' 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3189824 00:05:02.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3189824) - No such process 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3189824 is not found' 00:05:02.048 Process with pid 3189824 is not found 00:05:02.048 18:59:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3189962 ]] 00:05:02.048 18:59:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3189962 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3189962 ']' 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3189962 00:05:02.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3189962) - No such process 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3189962 is not found' 00:05:02.048 Process with pid 3189962 is not found 00:05:02.048 18:59:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.048 00:05:02.048 real 0m19.324s 00:05:02.048 user 0m33.788s 00:05:02.048 sys 0m5.685s 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.048 18:59:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.048 ************************************ 00:05:02.048 END TEST cpu_locks 00:05:02.048 ************************************ 00:05:02.048 18:59:42 event -- common/autotest_common.sh@1142 -- # return 0 00:05:02.048 00:05:02.048 real 0m43.362s 00:05:02.048 user 1m21.618s 00:05:02.048 sys 0m9.706s 00:05:02.048 18:59:42 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.048 18:59:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.048 ************************************ 00:05:02.048 END TEST event 00:05:02.048 ************************************ 00:05:02.048 18:59:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:02.048 18:59:42 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.048 18:59:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.048 18:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.306 
18:59:42 -- common/autotest_common.sh@10 -- # set +x 00:05:02.306 ************************************ 00:05:02.306 START TEST thread 00:05:02.306 ************************************ 00:05:02.306 18:59:42 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.306 * Looking for test storage... 00:05:02.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:02.306 18:59:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.306 18:59:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:02.306 18:59:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.306 18:59:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.306 ************************************ 00:05:02.306 START TEST thread_poller_perf 00:05:02.306 ************************************ 00:05:02.306 18:59:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.306 [2024-07-15 18:59:42.597739] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:02.306 [2024-07-15 18:59:42.597805] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190334 ] 00:05:02.306 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.306 [2024-07-15 18:59:42.658967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.565 [2024-07-15 18:59:42.777389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.565 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:03.503 ====================================== 00:05:03.503 busy:2710903388 (cyc) 00:05:03.503 total_run_count: 292000 00:05:03.503 tsc_hz: 2700000000 (cyc) 00:05:03.503 ====================================== 00:05:03.503 poller_cost: 9283 (cyc), 3438 (nsec) 00:05:03.503 00:05:03.503 real 0m1.322s 00:05:03.503 user 0m1.238s 00:05:03.503 sys 0m0.078s 00:05:03.503 18:59:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.503 18:59:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.503 ************************************ 00:05:03.503 END TEST thread_poller_perf 00:05:03.503 ************************************ 00:05:03.503 18:59:43 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:03.503 18:59:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.503 18:59:43 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:03.503 18:59:43 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.503 18:59:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.760 ************************************ 00:05:03.760 START TEST thread_poller_perf 00:05:03.760 ************************************ 00:05:03.760 18:59:43 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.760 [2024-07-15 18:59:43.969605] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:03.760 [2024-07-15 18:59:43.969673] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190491 ] 00:05:03.760 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.760 [2024-07-15 18:59:44.032849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.760 [2024-07-15 18:59:44.159504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.760 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:05.137 ====================================== 00:05:05.137 busy:2703037070 (cyc) 00:05:05.137 total_run_count: 3784000 00:05:05.137 tsc_hz: 2700000000 (cyc) 00:05:05.137 ====================================== 00:05:05.137 poller_cost: 714 (cyc), 264 (nsec) 00:05:05.137 00:05:05.137 real 0m1.325s 00:05:05.137 user 0m1.232s 00:05:05.137 sys 0m0.087s 00:05:05.137 18:59:45 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.137 18:59:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.137 ************************************ 00:05:05.137 END TEST thread_poller_perf 00:05:05.137 ************************************ 00:05:05.137 18:59:45 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:05.137 18:59:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:05.137 00:05:05.137 real 0m2.800s 00:05:05.137 user 0m2.535s 00:05:05.137 sys 0m0.264s 00:05:05.137 18:59:45 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.137 18:59:45 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.137 ************************************ 00:05:05.137 END TEST thread 00:05:05.137 ************************************ 00:05:05.137 18:59:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.137 18:59:45 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.137 18:59:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.137 18:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.137 18:59:45 -- common/autotest_common.sh@10 -- # set +x 00:05:05.137 ************************************ 00:05:05.137 START TEST accel 00:05:05.137 ************************************ 00:05:05.137 18:59:45 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.137 * Looking for test storage... 00:05:05.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:05.137 18:59:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:05.137 18:59:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:05.137 18:59:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.137 18:59:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3190809 00:05:05.137 18:59:45 accel -- accel/accel.sh@63 -- # waitforlisten 3190809 00:05:05.137 18:59:45 accel -- common/autotest_common.sh@829 -- # '[' -z 3190809 ']' 00:05:05.137 18:59:45 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.137 18:59:45 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:05.137 18:59:45 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.137 18:59:45 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.137 18:59:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:05.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
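For reference, the two poller_perf summaries above appear to derive poller_cost as busy cycles divided by total_run_count, converted to nanoseconds via the printed tsc_hz; the sketch below simply recomputes the first run's figures from the numbers already shown in the log (the input values are copied verbatim from that summary, and the derivation is an inference from the printed fields, not taken from the test script itself):

    #!/usr/bin/env bash
    # Recompute poller_cost from the figures printed by the first poller_perf run above.
    busy=2710903388        # busy cycles reported for the 1 us period run
    runs=292000            # total_run_count reported for the same run
    tsc_hz=2700000000      # reported TSC frequency (2.7 GHz)
    cyc=$(( busy / runs ))                    # -> 9283 cyc, matching the log
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # -> 3438 nsec, matching the log
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic reproduces the second run's 714 cyc / 264 nsec from busy:2703037070 and total_run_count:3784000.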
00:05:05.137 18:59:45 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.137 18:59:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.137 18:59:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.137 18:59:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.137 18:59:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.137 18:59:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.137 18:59:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.137 18:59:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:05.137 18:59:45 accel -- accel/accel.sh@41 -- # jq -r . 00:05:05.137 [2024-07-15 18:59:45.446986] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:05.137 [2024-07-15 18:59:45.447069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190809 ] 00:05:05.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.137 [2024-07-15 18:59:45.507456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.396 [2024-07-15 18:59:45.623112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.963 18:59:46 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.963 18:59:46 accel -- common/autotest_common.sh@862 -- # return 0 00:05:05.963 18:59:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:05.963 18:59:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:05.963 18:59:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:05.963 18:59:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:05.963 18:59:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:05.963 18:59:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:05.963 18:59:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:05.963 18:59:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.963 18:59:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:05.963 18:59:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.222 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.222 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.222 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.223 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.223 18:59:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.223 18:59:46 accel -- accel/accel.sh@72 -- # IFS== 00:05:06.223 18:59:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:06.223 18:59:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:06.223 18:59:46 accel -- accel/accel.sh@75 -- # killprocess 3190809 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@948 -- # '[' -z 3190809 ']' 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@952 -- # kill -0 3190809 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@953 -- # uname 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3190809 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3190809' 00:05:06.223 killing process with pid 3190809 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@967 -- # kill 3190809 00:05:06.223 18:59:46 accel -- common/autotest_common.sh@972 -- # wait 3190809 00:05:06.481 18:59:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:06.481 18:59:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:06.481 18:59:46 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:06.481 18:59:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.481 18:59:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.739 18:59:46 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:06.739 18:59:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:06.739 18:59:46 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.739 18:59:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:06.739 18:59:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.739 18:59:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:06.739 18:59:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:06.739 18:59:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.739 18:59:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.739 ************************************ 00:05:06.739 START TEST accel_missing_filename 00:05:06.739 ************************************ 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.739 18:59:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:06.739 18:59:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:06.739 18:59:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:06.739 18:59:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.740 18:59:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.740 18:59:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.740 18:59:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.740 18:59:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.740 18:59:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:06.740 18:59:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:06.740 [2024-07-15 18:59:47.001840] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:06.740 [2024-07-15 18:59:47.001938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190980 ] 00:05:06.740 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.740 [2024-07-15 18:59:47.065691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.999 [2024-07-15 18:59:47.184314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.999 [2024-07-15 18:59:47.245797] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.999 [2024-07-15 18:59:47.329944] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:07.259 A filename is required. 
00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.259 00:05:07.259 real 0m0.470s 00:05:07.259 user 0m0.362s 00:05:07.259 sys 0m0.141s 00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.259 18:59:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:07.259 ************************************ 00:05:07.259 END TEST accel_missing_filename 00:05:07.259 ************************************ 00:05:07.259 18:59:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.259 18:59:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.259 18:59:47 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:07.259 18:59:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.259 18:59:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.259 ************************************ 00:05:07.259 START TEST accel_compress_verify 00:05:07.259 ************************************ 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.259 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.259 18:59:47 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:07.259 18:59:47 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:07.259 [2024-07-15 18:59:47.522752] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:07.259 [2024-07-15 18:59:47.522817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191052 ] 00:05:07.259 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.259 [2024-07-15 18:59:47.586672] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.519 [2024-07-15 18:59:47.713488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.519 [2024-07-15 18:59:47.773150] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.519 [2024-07-15 18:59:47.857129] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:07.778 00:05:07.778 Compression does not support the verify option, aborting. 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.778 00:05:07.778 real 0m0.478s 00:05:07.778 user 0m0.368s 00:05:07.778 sys 0m0.145s 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.778 18:59:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:07.778 ************************************ 00:05:07.778 END TEST accel_compress_verify 00:05:07.778 ************************************ 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.778 18:59:48 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.778 ************************************ 00:05:07.778 START TEST accel_wrong_workload 00:05:07.778 ************************************ 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.778 18:59:48 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:07.778 18:59:48 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:07.778 Unsupported workload type: foobar 00:05:07.778 [2024-07-15 18:59:48.045052] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:07.778 accel_perf options: 00:05:07.778 [-h help message] 00:05:07.778 [-q queue depth per core] 00:05:07.778 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.778 [-T number of threads per core 00:05:07.778 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.778 [-t time in seconds] 00:05:07.778 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.778 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:07.778 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.778 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.778 [-S for crc32c workload, use this seed value (default 0) 00:05:07.778 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.778 [-f for fill workload, use this BYTE value (default 255) 00:05:07.778 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.778 [-y verify result if this switch is on] 00:05:07.778 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.778 Can be used to spread operations across a wider range of memory. 
00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.778 00:05:07.778 real 0m0.022s 00:05:07.778 user 0m0.011s 00:05:07.778 sys 0m0.011s 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.778 18:59:48 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:07.778 ************************************ 00:05:07.778 END TEST accel_wrong_workload 00:05:07.778 ************************************ 00:05:07.778 Error: writing output failed: Broken pipe 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.778 18:59:48 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.778 ************************************ 00:05:07.778 START TEST accel_negative_buffers 00:05:07.778 ************************************ 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:07.778 18:59:48 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:07.778 -x option must be non-negative. 
00:05:07.778 [2024-07-15 18:59:48.112396] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:07.778 accel_perf options: 00:05:07.778 [-h help message] 00:05:07.778 [-q queue depth per core] 00:05:07.778 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.778 [-T number of threads per core 00:05:07.778 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.778 [-t time in seconds] 00:05:07.778 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.778 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:07.778 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.778 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.778 [-S for crc32c workload, use this seed value (default 0) 00:05:07.778 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.778 [-f for fill workload, use this BYTE value (default 255) 00:05:07.778 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.778 [-y verify result if this switch is on] 00:05:07.778 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.778 Can be used to spread operations across a wider range of memory. 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:07.778 00:05:07.778 real 0m0.022s 00:05:07.778 user 0m0.013s 00:05:07.778 sys 0m0.009s 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.778 18:59:48 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:07.778 ************************************ 00:05:07.778 END TEST accel_negative_buffers 00:05:07.778 ************************************ 00:05:07.778 Error: writing output failed: Broken pipe 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.778 18:59:48 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.778 18:59:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.778 ************************************ 00:05:07.778 START TEST accel_crc32c 00:05:07.779 ************************************ 00:05:07.779 18:59:48 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:07.779 18:59:48 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:07.779 [2024-07-15 18:59:48.181584] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:07.779 [2024-07-15 18:59:48.181647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191193 ] 00:05:08.039 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.039 [2024-07-15 18:59:48.247004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.039 [2024-07-15 18:59:48.370725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:08.039 18:59:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:09.418 18:59:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.418 00:05:09.418 real 0m1.485s 00:05:09.418 user 0m1.344s 00:05:09.418 sys 0m0.142s 00:05:09.418 18:59:49 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.418 18:59:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:09.418 ************************************ 00:05:09.418 END TEST accel_crc32c 00:05:09.418 ************************************ 00:05:09.418 18:59:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:09.418 18:59:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:09.418 18:59:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:09.418 18:59:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.418 18:59:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.418 ************************************ 00:05:09.418 START TEST accel_crc32c_C2 00:05:09.418 ************************************ 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.418 18:59:49 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.418 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.419 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.419 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:09.419 [2024-07-15 18:59:49.710100] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:09.419 [2024-07-15 18:59:49.710174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191355 ] 00:05:09.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.419 [2024-07-15 18:59:49.772462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.678 [2024-07-15 18:59:49.889504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.678 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:09.679 18:59:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.060 00:05:11.060 real 0m1.471s 00:05:11.060 user 0m1.328s 00:05:11.060 sys 0m0.145s 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.060 18:59:51 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:11.060 ************************************ 00:05:11.060 END TEST accel_crc32c_C2 00:05:11.060 ************************************ 00:05:11.060 18:59:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.060 18:59:51 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:11.060 18:59:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:11.060 18:59:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.060 18:59:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.060 ************************************ 00:05:11.060 START TEST accel_copy 00:05:11.060 ************************************ 00:05:11.060 18:59:51 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:11.060 [2024-07-15 18:59:51.221632] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:11.060 [2024-07-15 18:59:51.221688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191627 ] 00:05:11.060 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.060 [2024-07-15 18:59:51.283096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.060 [2024-07-15 18:59:51.401429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.060 18:59:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 
18:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:12.442 18:59:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.442 00:05:12.442 real 0m1.471s 00:05:12.442 user 0m1.333s 00:05:12.442 sys 0m0.140s 00:05:12.442 18:59:52 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.442 18:59:52 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:12.442 ************************************ 00:05:12.442 END TEST accel_copy 00:05:12.442 ************************************ 00:05:12.442 18:59:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.442 18:59:52 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.442 18:59:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:12.442 18:59:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.442 18:59:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.442 ************************************ 00:05:12.442 START TEST accel_fill 00:05:12.442 ************************************ 00:05:12.442 18:59:52 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:12.442 18:59:52 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:12.442 [2024-07-15 18:59:52.741392] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:12.442 [2024-07-15 18:59:52.741459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191785 ] 00:05:12.442 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.442 [2024-07-15 18:59:52.801898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.701 [2024-07-15 18:59:52.920070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.701 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.702 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.702 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.702 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:12.702 18:59:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:12.702 18:59:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:12.702 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:12.702 18:59:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.082 18:59:54 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.082 18:59:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:14.083 18:59:54 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.083 00:05:14.083 real 0m1.467s 00:05:14.083 user 0m1.317s 00:05:14.083 sys 0m0.152s 00:05:14.083 18:59:54 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.083 18:59:54 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:14.083 ************************************ 00:05:14.083 END TEST accel_fill 00:05:14.083 ************************************ 00:05:14.083 18:59:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:14.083 18:59:54 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:14.083 18:59:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:14.083 18:59:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.083 18:59:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.083 ************************************ 00:05:14.083 START TEST accel_copy_crc32c 00:05:14.083 ************************************ 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:14.083 [2024-07-15 18:59:54.252700] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:14.083 [2024-07-15 18:59:54.252777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191943 ] 00:05:14.083 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.083 [2024-07-15 18:59:54.318018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.083 [2024-07-15 18:59:54.436232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:14.083 
18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:14.083 18:59:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.492 00:05:15.492 real 0m1.475s 00:05:15.492 user 0m1.324s 00:05:15.492 sys 0m0.154s 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.492 18:59:55 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:15.492 ************************************ 00:05:15.492 END TEST accel_copy_crc32c 00:05:15.492 ************************************ 00:05:15.492 18:59:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:15.492 18:59:55 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.492 18:59:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:15.492 18:59:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.492 18:59:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.492 ************************************ 00:05:15.492 START TEST accel_copy_crc32c_C2 00:05:15.492 ************************************ 00:05:15.492 18:59:55 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.492 18:59:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:15.492 [2024-07-15 18:59:55.778240] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:15.492 [2024-07-15 18:59:55.778301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192221 ] 00:05:15.492 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.492 [2024-07-15 18:59:55.836732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.752 [2024-07-15 18:59:55.954614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.752 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:15.753 18:59:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.133 00:05:17.133 real 0m1.480s 00:05:17.133 user 0m1.332s 00:05:17.133 sys 0m0.150s 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.133 18:59:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:17.133 ************************************ 00:05:17.133 END TEST accel_copy_crc32c_C2 00:05:17.133 ************************************ 00:05:17.133 18:59:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:17.133 18:59:57 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:17.133 18:59:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:17.133 18:59:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.133 18:59:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.133 ************************************ 00:05:17.133 START TEST accel_dualcast 00:05:17.133 ************************************ 00:05:17.133 18:59:57 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:17.133 [2024-07-15 18:59:57.306296] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:17.133 [2024-07-15 18:59:57.306359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192378 ] 00:05:17.133 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.133 [2024-07-15 18:59:57.371111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.133 [2024-07-15 18:59:57.495919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:17.133 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.134 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:17.393 18:59:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:58 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:18.771 18:59:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.771 00:05:18.771 real 0m1.490s 00:05:18.771 user 0m1.340s 00:05:18.771 sys 0m0.152s 00:05:18.771 18:59:58 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.771 18:59:58 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:18.771 ************************************ 00:05:18.771 END TEST accel_dualcast 00:05:18.771 ************************************ 00:05:18.771 18:59:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.771 18:59:58 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:18.771 18:59:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:18.771 18:59:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.771 18:59:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.771 ************************************ 00:05:18.771 START TEST accel_compare 00:05:18.771 ************************************ 00:05:18.771 18:59:58 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:18.771 18:59:58 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:18.771 [2024-07-15 18:59:58.841538] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:18.771 [2024-07-15 18:59:58.841605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192603 ] 00:05:18.771 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.771 [2024-07-15 18:59:58.903654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.771 [2024-07-15 18:59:59.026558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:18.771 18:59:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 
19:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:20.152 19:00:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.152 00:05:20.152 real 0m1.486s 00:05:20.152 user 0m1.339s 00:05:20.152 sys 0m0.149s 00:05:20.152 19:00:00 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.152 19:00:00 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:20.152 ************************************ 00:05:20.152 END TEST accel_compare 00:05:20.152 ************************************ 00:05:20.152 19:00:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.152 19:00:00 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:20.152 19:00:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:20.152 19:00:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.152 19:00:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.152 ************************************ 00:05:20.152 START TEST accel_xor 00:05:20.152 ************************************ 00:05:20.152 19:00:00 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:20.152 19:00:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:20.152 [2024-07-15 19:00:00.363507] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:20.152 [2024-07-15 19:00:00.363583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192845 ] 00:05:20.152 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.152 [2024-07-15 19:00:00.426012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.152 [2024-07-15 19:00:00.553071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.412 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:20.413 19:00:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.792 00:05:21.792 real 0m1.489s 00:05:21.792 user 0m1.343s 00:05:21.792 sys 0m0.146s 00:05:21.792 19:00:01 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.792 19:00:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:21.792 ************************************ 00:05:21.792 END TEST accel_xor 00:05:21.792 ************************************ 00:05:21.792 19:00:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.792 19:00:01 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:21.792 19:00:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:21.792 19:00:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.792 19:00:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.792 ************************************ 00:05:21.792 START TEST accel_xor 00:05:21.792 ************************************ 00:05:21.792 19:00:01 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:21.792 19:00:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:21.792 [2024-07-15 19:00:01.897101] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:21.792 [2024-07-15 19:00:01.897165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193081 ] 00:05:21.792 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.792 [2024-07-15 19:00:01.961635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.792 [2024-07-15 19:00:02.084707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:21.792 19:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.170 19:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.171 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.171 19:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.171 19:00:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.171 19:00:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:23.171 19:00:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.171 00:05:23.171 real 0m1.485s 00:05:23.171 user 0m1.341s 00:05:23.171 sys 0m0.146s 00:05:23.171 19:00:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.171 19:00:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:23.171 ************************************ 00:05:23.171 END TEST accel_xor 00:05:23.171 ************************************ 00:05:23.171 19:00:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.171 19:00:03 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:23.171 19:00:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:23.171 19:00:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.171 19:00:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.171 ************************************ 00:05:23.171 START TEST accel_dif_verify 00:05:23.171 ************************************ 00:05:23.171 19:00:03 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:23.171 19:00:03 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:23.171 [2024-07-15 19:00:03.427525] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:23.171 [2024-07-15 19:00:03.427593] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193359 ] 00:05:23.171 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.171 [2024-07-15 19:00:03.490749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.429 [2024-07-15 19:00:03.613486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.429 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:23.430 19:00:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:24.821 19:00:04 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.821 00:05:24.821 real 0m1.489s 00:05:24.821 user 0m1.352s 00:05:24.821 sys 0m0.141s 00:05:24.821 19:00:04 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.821 19:00:04 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:24.821 ************************************ 00:05:24.821 END TEST accel_dif_verify 00:05:24.821 ************************************ 00:05:24.821 19:00:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.821 19:00:04 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:24.821 19:00:04 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:24.821 19:00:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.821 19:00:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.821 ************************************ 00:05:24.821 START TEST accel_dif_generate 00:05:24.821 ************************************ 00:05:24.821 19:00:04 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 
19:00:04 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:24.821 19:00:04 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:24.821 [2024-07-15 19:00:04.964287] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:24.821 [2024-07-15 19:00:04.964350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193529 ] 00:05:24.821 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.821 [2024-07-15 19:00:05.025807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.821 [2024-07-15 19:00:05.147633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:24.821 19:00:05 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.821 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.822 19:00:05 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:24.822 19:00:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.219 19:00:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:26.219 19:00:06 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.219 00:05:26.219 real 0m1.487s 00:05:26.219 user 0m1.350s 00:05:26.219 sys 0m0.141s 00:05:26.219 19:00:06 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.219 19:00:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:26.219 ************************************ 00:05:26.219 END TEST accel_dif_generate 00:05:26.219 ************************************ 00:05:26.219 19:00:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.219 19:00:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:26.219 19:00:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:26.219 19:00:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.219 19:00:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.219 ************************************ 00:05:26.219 START TEST accel_dif_generate_copy 00:05:26.219 ************************************ 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.219 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.220 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:26.220 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:26.220 [2024-07-15 19:00:06.496389] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
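For reference, the DIF workloads above are driven by the accel.sh test script, which hands accel_perf a JSON accel config over /dev/fd/62 (the -c /dev/fd/62 argument in the trace); since this run ends up on the software module (accel_module=software), a rough standalone reproduction from an SPDK build tree might look like the sketch below. Dropping -c assumes the default software engine is acceptable, so treat this as illustrative rather than the harness's exact command line.

  # sketch: software-engine DIF generate and generate-copy runs, 1 second each,
  # mirroring only the -t/-w flags recorded in this log
  ./build/examples/accel_perf -t 1 -w dif_generate
  ./build/examples/accel_perf -t 1 -w dif_generate_copy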
00:05:26.220 [2024-07-15 19:00:06.496459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193689 ] 00:05:26.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.220 [2024-07-15 19:00:06.559358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.479 [2024-07-15 19:00:06.682055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.479 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.480 19:00:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.855 00:05:27.855 real 0m1.477s 00:05:27.855 user 0m1.326s 00:05:27.855 sys 0m0.153s 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.855 19:00:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:27.855 ************************************ 00:05:27.855 END TEST accel_dif_generate_copy 00:05:27.855 ************************************ 00:05:27.855 19:00:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.855 19:00:07 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:27.855 19:00:07 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.855 19:00:07 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:27.855 19:00:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.855 19:00:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.855 ************************************ 00:05:27.855 START TEST accel_comp 00:05:27.855 ************************************ 00:05:27.855 19:00:07 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.856 19:00:07 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:27.856 19:00:07 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:27.856 19:00:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:07 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.856 19:00:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:27.856 [2024-07-15 19:00:08.017932] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:27.856 [2024-07-15 19:00:08.017999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194203 ] 00:05:27.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.856 [2024-07-15 19:00:08.080254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.856 [2024-07-15 19:00:08.200370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:27.856 19:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:29.226 19:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.226 00:05:29.226 real 0m1.474s 00:05:29.226 user 0m1.327s 00:05:29.226 sys 0m0.149s 00:05:29.226 19:00:09 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.226 19:00:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:29.226 ************************************ 00:05:29.226 END TEST accel_comp 00:05:29.226 ************************************ 00:05:29.226 19:00:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.226 19:00:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.226 19:00:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:29.226 19:00:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.226 19:00:09 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:29.226 ************************************ 00:05:29.226 START TEST accel_decomp 00:05:29.226 ************************************ 00:05:29.226 19:00:09 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.226 19:00:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:29.226 19:00:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:29.226 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.226 19:00:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.226 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.226 19:00:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:29.226 19:00:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:29.227 19:00:09 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.227 19:00:09 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.227 19:00:09 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.227 19:00:09 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.227 19:00:09 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.227 19:00:09 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:29.227 19:00:09 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:29.227 [2024-07-15 19:00:09.536253] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
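The compress/decompress tests point accel_perf at the bundled test/accel/bib file via -l, and the decompress case adds -y, which lines up with the expected value flipping from No to Yes in the trace (presumably output verification). A hedged standalone sketch using only the flags visible above, again assuming the default software engine:

  # sketch: 1-second software compress and decompress runs against test/accel/bib,
  # as in the log; -y mirrors the decompress invocation
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y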
00:05:29.227 [2024-07-15 19:00:09.536318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194620 ] 00:05:29.227 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.227 [2024-07-15 19:00:09.598236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.485 [2024-07-15 19:00:09.722924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.485 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:29.486 19:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.861 19:00:10 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:30.861 19:00:10 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.861 00:05:30.861 real 0m1.480s 00:05:30.861 user 0m1.338s 00:05:30.861 sys 0m0.145s 00:05:30.862 19:00:10 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.862 19:00:10 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:30.862 ************************************ 00:05:30.862 END TEST accel_decomp 00:05:30.862 ************************************ 00:05:30.862 19:00:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.862 19:00:11 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.862 19:00:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:30.862 19:00:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.862 19:00:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.862 ************************************ 00:05:30.862 START TEST accel_decomp_full 00:05:30.862 ************************************ 00:05:30.862 19:00:11 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:30.862 19:00:11 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:30.862 19:00:11 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:30.862 [2024-07-15 19:00:11.062031] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:30.862 [2024-07-15 19:00:11.062098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194781 ] 00:05:30.862 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.862 [2024-07-15 19:00:11.123717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.862 [2024-07-15 19:00:11.246273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.124 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:31.125 19:00:11 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.109 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:32.369 19:00:12 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.369 00:05:32.369 real 0m1.497s 00:05:32.369 user 0m1.365s 00:05:32.369 sys 0m0.134s 00:05:32.369 19:00:12 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.369 19:00:12 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:32.369 ************************************ 00:05:32.369 END TEST accel_decomp_full 00:05:32.369 ************************************ 00:05:32.369 19:00:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.369 19:00:12 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.369 19:00:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
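The _full variant above differs from the plain decompress test only by -o 0 on the accel_perf command line; in its trace the payload shows up as '111250 bytes' instead of the default '4096 bytes', so -o 0 appears to switch the run to a single large buffer rather than fixed 4 KiB blocks, while the timing stays in the same range (real ~1.5 s, dominated by the 1-second -t window plus startup). A sketch of that invocation, mirroring the flags in the log:

  # sketch: full-buffer decompress variant (-o 0), software engine assumed
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0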
00:05:32.369 19:00:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.369 19:00:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.369 ************************************ 00:05:32.369 START TEST accel_decomp_mcore 00:05:32.369 ************************************ 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:32.369 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:32.369 [2024-07-15 19:00:12.603055] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
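The mcore variants add -m 0xf, the usual SPDK core-mask option: the trace that follows reports 'Total cores available: 4' with reactors starting on cores 0-3, and the user time reported at the end of the test (~4.8 s against ~1.5 s real) is consistent with four cores kept busy for the 1-second run. A minimal multi-core sketch along the same lines:

  # sketch: run the decompress workload across cores 0-3 (core mask 0xf),
  # matching the -m flag recorded for the mcore tests
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf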
00:05:32.369 [2024-07-15 19:00:12.603120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195057 ] 00:05:32.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.369 [2024-07-15 19:00:12.664205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.369 [2024-07-15 19:00:12.794354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.369 [2024-07-15 19:00:12.794410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.369 [2024-07-15 19:00:12.794460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.369 [2024-07-15 19:00:12.794464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:32.628 19:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.007 00:05:34.007 real 0m1.497s 00:05:34.007 user 0m4.791s 00:05:34.007 sys 0m0.162s 00:05:34.007 19:00:14 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.007 19:00:14 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:34.007 ************************************ 00:05:34.007 END TEST accel_decomp_mcore 00:05:34.007 ************************************ 00:05:34.007 19:00:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.007 19:00:14 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.007 19:00:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:34.007 19:00:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.007 19:00:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.007 ************************************ 00:05:34.007 START TEST accel_decomp_full_mcore 00:05:34.007 ************************************ 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:34.007 [2024-07-15 19:00:14.151801] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
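The decompress cases in this stretch of the log all drive the accel_perf example binary; the *_mcore variants spread it over four cores (-m 0xf, matching the '-c 0xf' EAL parameter and the four reactor start-up notices that follow). A rough by-hand equivalent is sketched below; it drops the JSON accel config the harness feeds on /dev/fd/62, which assumes the default software module is in play - exactly what these runs report.
# Sketch only: paths are the ones used by this job, and the flag readings
# (-t run time in seconds, -w workload, -l compressed input file, -y verify
# the output, -o chunk size, -m core mask) are inferred from how accel.sh
# invokes the tool; confirm against 'accel_perf --help'. Needs an SPDK build
# and hugepages set up, as on this test node.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" \
  -t 1 -w decompress \
  -l "$SPDK_DIR/test/accel/bib" \
  -y -o 0 -m 0xf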
00:05:34.007 [2024-07-15 19:00:14.151856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195216 ] 00:05:34.007 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.007 [2024-07-15 19:00:14.215212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.007 [2024-07-15 19:00:14.336288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.007 [2024-07-15 19:00:14.336344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.007 [2024-07-15 19:00:14.336400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.007 [2024-07-15 19:00:14.336403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.007 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.008 19:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.382 00:05:35.382 real 0m1.499s 00:05:35.382 user 0m4.825s 00:05:35.382 sys 0m0.154s 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.382 19:00:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:35.382 ************************************ 00:05:35.382 END TEST accel_decomp_full_mcore 00:05:35.382 ************************************ 00:05:35.382 19:00:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.382 19:00:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.382 19:00:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:35.382 19:00:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.382 19:00:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.382 ************************************ 00:05:35.382 START TEST accel_decomp_mthread 00:05:35.382 ************************************ 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:35.382 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:35.382 [2024-07-15 19:00:15.701850] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
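The *_mthread variants that start here take the opposite approach: a single core (EAL '-c 0x1', one reactor) with two worker threads requested through -T 2. The 'full' flavour also passes -o 0, and the logged buffer size switches from '4096 bytes' to '111250 bytes' further down, which suggests -o 0 makes each operation span the whole input file instead of 4 KiB chunks - an inference from this log rather than a documented fact. A matching sketch:
# SPDK_DIR as in the earlier sketch; drop '-o 0' to reproduce the 4 KiB
# per-operation case (accel_decomp_mthread) rather than the whole-file one
# (accel_decomp_full_mthread). Same caveats on flag meanings as above.
"$SPDK_DIR/build/examples/accel_perf" \
  -t 1 -w decompress \
  -l "$SPDK_DIR/test/accel/bib" \
  -y -o 0 -T 2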
00:05:35.382 [2024-07-15 19:00:15.701972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195383 ] 00:05:35.382 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.382 [2024-07-15 19:00:15.768270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.640 [2024-07-15 19:00:15.892040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:35.640 19:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.073 00:05:37.073 real 0m1.501s 00:05:37.073 user 0m1.356s 00:05:37.073 sys 0m0.149s 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.073 19:00:17 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:37.073 ************************************ 00:05:37.073 END TEST accel_decomp_mthread 00:05:37.073 ************************************ 00:05:37.073 19:00:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.073 19:00:17 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.073 19:00:17 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:37.073 19:00:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.073 19:00:17 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.073 ************************************ 00:05:37.073 START TEST accel_decomp_full_mthread 00:05:37.073 ************************************ 00:05:37.073 19:00:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.073 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:37.073 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:37.073 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:37.074 [2024-07-15 19:00:17.250734] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:37.074 [2024-07-15 19:00:17.250804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195651 ] 00:05:37.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.074 [2024-07-15 19:00:17.313976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.074 [2024-07-15 19:00:17.435396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.074 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.332 19:00:17 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.332 19:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.711 00:05:38.711 real 0m1.526s 00:05:38.711 user 0m1.381s 00:05:38.711 sys 0m0.148s 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.711 19:00:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:38.711 ************************************ 00:05:38.711 END 
TEST accel_decomp_full_mthread 00:05:38.711 ************************************ 00:05:38.711 19:00:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.711 19:00:18 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:38.711 19:00:18 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:38.711 19:00:18 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:38.711 19:00:18 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:38.711 19:00:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.711 19:00:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.711 19:00:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.711 19:00:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.711 19:00:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.711 19:00:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.711 19:00:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.711 19:00:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:38.711 19:00:18 accel -- accel/accel.sh@41 -- # jq -r . 00:05:38.711 ************************************ 00:05:38.711 START TEST accel_dif_functional_tests 00:05:38.711 ************************************ 00:05:38.711 19:00:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:38.711 [2024-07-15 19:00:18.841523] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:38.711 [2024-07-15 19:00:18.841585] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195818 ] 00:05:38.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.711 [2024-07-15 19:00:18.902725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.711 [2024-07-15 19:00:19.028786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.711 [2024-07-15 19:00:19.028841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.711 [2024-07-15 19:00:19.028844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.711 00:05:38.711 00:05:38.711 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.711 http://cunit.sourceforge.net/ 00:05:38.711 00:05:38.711 00:05:38.711 Suite: accel_dif 00:05:38.711 Test: verify: DIF generated, GUARD check ...passed 00:05:38.711 Test: verify: DIF generated, APPTAG check ...passed 00:05:38.711 Test: verify: DIF generated, REFTAG check ...passed 00:05:38.711 Test: verify: DIF not generated, GUARD check ...[2024-07-15 19:00:19.131311] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:38.711 passed 00:05:38.711 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 19:00:19.131392] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:38.711 passed 00:05:38.711 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 19:00:19.131430] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:38.711 passed 00:05:38.711 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:38.711 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
19:00:19.131510] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:38.711 passed 00:05:38.711 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:38.711 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:38.711 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:38.711 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 19:00:19.131671] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:38.711 passed 00:05:38.711 Test: verify copy: DIF generated, GUARD check ...passed 00:05:38.711 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:38.711 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:38.711 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 19:00:19.131849] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:38.711 passed 00:05:38.711 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 19:00:19.131906] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:38.711 passed 00:05:38.711 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 19:00:19.131948] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:38.711 passed 00:05:38.711 Test: generate copy: DIF generated, GUARD check ...passed 00:05:38.711 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:38.711 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:38.711 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:38.711 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:38.711 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:38.711 Test: generate copy: iovecs-len validate ...[2024-07-15 19:00:19.132206] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:38.711 passed 00:05:38.712 Test: generate copy: buffer alignment validate ...passed 00:05:38.712 00:05:38.712 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.712 suites 1 1 n/a 0 0 00:05:38.712 tests 26 26 26 0 0 00:05:38.712 asserts 115 115 115 0 n/a 00:05:38.712 00:05:38.712 Elapsed time = 0.003 seconds 00:05:38.969 00:05:38.969 real 0m0.591s 00:05:38.969 user 0m0.878s 00:05:38.969 sys 0m0.190s 00:05:38.969 19:00:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.969 19:00:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:38.969 ************************************ 00:05:38.969 END TEST accel_dif_functional_tests 00:05:38.969 ************************************ 00:05:39.228 19:00:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.228 00:05:39.228 real 0m34.072s 00:05:39.228 user 0m37.615s 00:05:39.228 sys 0m4.691s 00:05:39.228 19:00:19 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.228 19:00:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.228 ************************************ 00:05:39.228 END TEST accel 00:05:39.228 ************************************ 00:05:39.228 19:00:19 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.228 19:00:19 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:39.228 19:00:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.228 19:00:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.228 19:00:19 -- common/autotest_common.sh@10 -- # set +x 00:05:39.228 ************************************ 00:05:39.228 START TEST accel_rpc 00:05:39.228 ************************************ 00:05:39.228 19:00:19 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:39.228 * Looking for test storage... 00:05:39.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:39.228 19:00:19 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.228 19:00:19 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3195997 00:05:39.228 19:00:19 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:39.228 19:00:19 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3195997 00:05:39.228 19:00:19 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3195997 ']' 00:05:39.228 19:00:19 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.228 19:00:19 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.228 19:00:19 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.228 19:00:19 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.228 19:00:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.228 [2024-07-15 19:00:19.572356] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:05:39.228 [2024-07-15 19:00:19.572426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195997 ] 00:05:39.228 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.228 [2024-07-15 19:00:19.631281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.487 [2024-07-15 19:00:19.743473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.487 19:00:19 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.487 19:00:19 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:39.487 19:00:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:39.487 19:00:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:39.487 19:00:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:39.487 19:00:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:39.487 19:00:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:39.487 19:00:19 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.487 19:00:19 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.487 19:00:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.487 ************************************ 00:05:39.487 START TEST accel_assign_opcode 00:05:39.487 ************************************ 00:05:39.487 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:39.487 19:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:39.487 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.487 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.487 [2024-07-15 19:00:19.808078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.488 [2024-07-15 19:00:19.816082] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.488 19:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.746 software 00:05:39.746 00:05:39.746 real 0m0.300s 00:05:39.746 user 0m0.035s 00:05:39.746 sys 0m0.006s 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.746 19:00:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:39.746 ************************************ 00:05:39.746 END TEST accel_assign_opcode 00:05:39.746 ************************************ 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:39.746 19:00:20 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3195997 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3195997 ']' 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3195997 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3195997 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3195997' 00:05:39.746 killing process with pid 3195997 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@967 -- # kill 3195997 00:05:39.746 19:00:20 accel_rpc -- common/autotest_common.sh@972 -- # wait 3195997 00:05:40.311 00:05:40.311 real 0m1.155s 00:05:40.311 user 0m1.068s 00:05:40.311 sys 0m0.433s 00:05:40.311 19:00:20 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.311 19:00:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.311 ************************************ 00:05:40.311 END TEST accel_rpc 00:05:40.311 ************************************ 00:05:40.311 19:00:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:40.311 19:00:20 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:40.311 19:00:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.311 19:00:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.311 19:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:40.311 ************************************ 00:05:40.311 START TEST app_cmdline 00:05:40.311 ************************************ 00:05:40.311 19:00:20 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:40.311 * Looking for test storage... 
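Condensed, the accel_rpc flow just logged is: start spdk_tgt with --wait-for-rpc so the framework stays uninitialized, reassign the 'copy' opcode, finish initialization, then read the assignment back. A hand-run sketch using the same binaries and RPC methods (waitforlisten is the harness helper that blocks until the RPC socket is up; a short wait stands in for it here):
# Sketch; assumes hugepages are already configured on the host, as on this node.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
sleep 2   # crude stand-in for waitforlisten

"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software
"$SPDK_DIR/scripts/rpc.py" framework_start_init
"$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # prints: software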
00:05:40.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:40.311 19:00:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:40.311 19:00:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3196209 00:05:40.311 19:00:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:40.311 19:00:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3196209 00:05:40.311 19:00:20 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3196209 ']' 00:05:40.311 19:00:20 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.311 19:00:20 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.311 19:00:20 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.311 19:00:20 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.311 19:00:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.570 [2024-07-15 19:00:20.781840] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:05:40.570 [2024-07-15 19:00:20.781968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196209 ] 00:05:40.570 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.570 [2024-07-15 19:00:20.839404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.570 [2024-07-15 19:00:20.944361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.827 19:00:21 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.827 19:00:21 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:40.827 19:00:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:41.084 { 00:05:41.084 "version": "SPDK v24.09-pre git sha1 a22f117fe", 00:05:41.084 "fields": { 00:05:41.084 "major": 24, 00:05:41.084 "minor": 9, 00:05:41.084 "patch": 0, 00:05:41.084 "suffix": "-pre", 00:05:41.084 "commit": "a22f117fe" 00:05:41.084 } 00:05:41.084 } 00:05:41.084 19:00:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:41.084 19:00:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:41.084 19:00:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:41.084 19:00:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:41.084 19:00:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:41.084 19:00:21 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.084 19:00:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:41.084 19:00:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:41.084 19:00:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:41.084 19:00:21 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.343 19:00:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:41.343 19:00:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:41.343 19:00:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:41.343 19:00:21 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.343 request: 00:05:41.343 { 00:05:41.343 "method": "env_dpdk_get_mem_stats", 00:05:41.343 "req_id": 1 00:05:41.343 } 00:05:41.343 Got JSON-RPC error response 00:05:41.343 response: 00:05:41.343 { 00:05:41.343 "code": -32601, 00:05:41.343 "message": "Method not found" 00:05:41.343 } 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.603 19:00:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3196209 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3196209 ']' 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3196209 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3196209 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3196209' 00:05:41.603 killing process with pid 3196209 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@967 -- # kill 3196209 00:05:41.603 19:00:21 app_cmdline -- common/autotest_common.sh@972 -- # wait 3196209 00:05:41.862 00:05:41.862 real 0m1.607s 00:05:41.862 user 0m1.985s 00:05:41.862 sys 0m0.452s 00:05:41.862 19:00:22 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
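The --rpcs-allowed behaviour exercised above can be reproduced by hand; a minimal sketch, assuming the spdk_tgt binary and rpc.py from this workspace and the default /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start the target with only two RPC methods on the allow-list
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # allowed: returns the version object shown above
    $SPDK/scripts/rpc.py spdk_get_version
    # allowed: should list exactly rpc_get_methods and spdk_get_version
    $SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # any other method is rejected with JSON-RPC error -32601 (Method not found)
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected as expected'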
00:05:41.862 19:00:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:41.862 ************************************ 00:05:41.862 END TEST app_cmdline 00:05:41.862 ************************************ 00:05:42.120 19:00:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.120 19:00:22 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:42.120 19:00:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.120 19:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.120 19:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:42.120 ************************************ 00:05:42.120 START TEST version 00:05:42.120 ************************************ 00:05:42.120 19:00:22 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:42.120 * Looking for test storage... 00:05:42.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:42.120 19:00:22 version -- app/version.sh@17 -- # get_header_version major 00:05:42.120 19:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.120 19:00:22 version -- app/version.sh@17 -- # major=24 00:05:42.120 19:00:22 version -- app/version.sh@18 -- # get_header_version minor 00:05:42.120 19:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.120 19:00:22 version -- app/version.sh@18 -- # minor=9 00:05:42.120 19:00:22 version -- app/version.sh@19 -- # get_header_version patch 00:05:42.120 19:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.120 19:00:22 version -- app/version.sh@19 -- # patch=0 00:05:42.120 19:00:22 version -- app/version.sh@20 -- # get_header_version suffix 00:05:42.120 19:00:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # cut -f2 00:05:42.120 19:00:22 version -- app/version.sh@14 -- # tr -d '"' 00:05:42.120 19:00:22 version -- app/version.sh@20 -- # suffix=-pre 00:05:42.120 19:00:22 version -- app/version.sh@22 -- # version=24.9 00:05:42.120 19:00:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:42.120 19:00:22 version -- app/version.sh@28 -- # version=24.9rc0 00:05:42.120 19:00:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:42.120 19:00:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:42.120 19:00:22 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:42.120 19:00:22 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:42.120 00:05:42.120 real 0m0.102s 00:05:42.120 user 0m0.050s 00:05:42.120 sys 0m0.074s 00:05:42.120 19:00:22 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.120 19:00:22 version -- common/autotest_common.sh@10 -- # set +x 00:05:42.120 ************************************ 00:05:42.120 END TEST version 00:05:42.120 ************************************ 00:05:42.120 19:00:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.120 19:00:22 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@198 -- # uname -s 00:05:42.120 19:00:22 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:42.120 19:00:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:42.120 19:00:22 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:42.120 19:00:22 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:42.120 19:00:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.120 19:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:42.120 19:00:22 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:42.120 19:00:22 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:42.120 19:00:22 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:42.120 19:00:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:42.120 19:00:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.120 19:00:22 -- common/autotest_common.sh@10 -- # set +x 00:05:42.120 ************************************ 00:05:42.120 START TEST nvmf_tcp 00:05:42.120 ************************************ 00:05:42.120 19:00:22 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:42.379 * Looking for test storage... 00:05:42.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.379 19:00:22 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.379 19:00:22 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.379 19:00:22 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.379 19:00:22 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:42.379 19:00:22 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:42.379 19:00:22 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.379 19:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:42.379 19:00:22 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:42.379 19:00:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:42.379 19:00:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.379 19:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.379 ************************************ 00:05:42.379 START TEST nvmf_example 00:05:42.379 ************************************ 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:42.379 * Looking for test storage... 
00:05:42.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:42.379 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:42.380 19:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:44.283 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:44.284 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:44.284 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:44.284 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:44.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:44.284 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:44.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:44.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:05:44.543 00:05:44.543 --- 10.0.0.2 ping statistics --- 00:05:44.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.543 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:44.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:44.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:05:44.543 00:05:44.543 --- 10.0.0.1 ping statistics --- 00:05:44.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.543 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3198130 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3198130 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3198130 ']' 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
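The namespace plumbing that nvmf_tcp_init performs above condenses to the commands below; this is a summary of what this run shows, with cvl_0_0/cvl_0_1 being the two ice ports found on this host:

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface, then check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1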
00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.543 19:00:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:44.543 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.480 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:45.740 19:00:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:45.740 EAL: No free 2048 kB hugepages reported on node 1 
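Stripped of the harness, the target bring-up and the measurement above reduce to the following sequence; a minimal sketch, assuming rpc.py reaches the example nvmf target on its default /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # TCP transport, one 64 MiB malloc bdev with 512-byte blocks exported as namespace 1
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # drive it from the initiator side: queue depth 64, 4 KiB I/O, random read/write mix, 10 seconds
    $SPDK/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'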
00:05:55.743 Initializing NVMe Controllers 00:05:55.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:55.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:55.743 Initialization complete. Launching workers. 00:05:55.743 ======================================================== 00:05:55.743 Latency(us) 00:05:55.743 Device Information : IOPS MiB/s Average min max 00:05:55.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14294.90 55.84 4477.35 887.14 16145.70 00:05:55.743 ======================================================== 00:05:55.743 Total : 14294.90 55.84 4477.35 887.14 16145.70 00:05:55.743 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:55.743 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:55.743 rmmod nvme_tcp 00:05:55.743 rmmod nvme_fabrics 00:05:55.743 rmmod nvme_keyring 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3198130 ']' 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3198130 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3198130 ']' 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3198130 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3198130 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3198130' 00:05:56.008 killing process with pid 3198130 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3198130 00:05:56.008 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3198130 00:05:56.268 nvmf threads initialize successfully 00:05:56.268 bdev subsystem init successfully 00:05:56.268 created a nvmf target service 00:05:56.268 create targets's poll groups done 00:05:56.268 all subsystems of target started 00:05:56.268 nvmf target is running 00:05:56.268 all subsystems of target stopped 00:05:56.268 destroy targets's poll groups done 00:05:56.268 destroyed the nvmf target service 00:05:56.268 bdev subsystem finish successfully 00:05:56.268 nvmf threads destroy successfully 00:05:56.268 19:00:36 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:56.268 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:56.268 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:56.268 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:56.268 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:56.268 19:00:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:56.268 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:56.268 19:00:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.169 19:00:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:58.169 19:00:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:58.169 19:00:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.169 19:00:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.169 00:05:58.169 real 0m15.936s 00:05:58.169 user 0m45.020s 00:05:58.169 sys 0m3.351s 00:05:58.169 19:00:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.169 19:00:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:58.169 ************************************ 00:05:58.169 END TEST nvmf_example 00:05:58.169 ************************************ 00:05:58.169 19:00:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:05:58.169 19:00:38 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:58.169 19:00:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:58.169 19:00:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.169 19:00:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.169 ************************************ 00:05:58.169 START TEST nvmf_filesystem 00:05:58.169 ************************************ 00:05:58.169 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:58.429 * Looking for test storage... 
00:05:58.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:58.429 19:00:38 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:58.429 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:58.430 #define SPDK_CONFIG_H 00:05:58.430 #define SPDK_CONFIG_APPS 1 00:05:58.430 #define SPDK_CONFIG_ARCH native 00:05:58.430 #undef SPDK_CONFIG_ASAN 00:05:58.430 #undef SPDK_CONFIG_AVAHI 00:05:58.430 #undef SPDK_CONFIG_CET 00:05:58.430 #define SPDK_CONFIG_COVERAGE 1 00:05:58.430 #define SPDK_CONFIG_CROSS_PREFIX 00:05:58.430 #undef SPDK_CONFIG_CRYPTO 00:05:58.430 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:58.430 #undef SPDK_CONFIG_CUSTOMOCF 00:05:58.430 #undef SPDK_CONFIG_DAOS 00:05:58.430 #define SPDK_CONFIG_DAOS_DIR 00:05:58.430 #define SPDK_CONFIG_DEBUG 1 00:05:58.430 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:58.430 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:58.430 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:58.430 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:58.430 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:58.430 #undef SPDK_CONFIG_DPDK_UADK 00:05:58.430 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:58.430 #define SPDK_CONFIG_EXAMPLES 1 00:05:58.430 #undef SPDK_CONFIG_FC 00:05:58.430 #define SPDK_CONFIG_FC_PATH 00:05:58.430 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:58.430 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:58.430 #undef SPDK_CONFIG_FUSE 00:05:58.430 #undef SPDK_CONFIG_FUZZER 00:05:58.430 #define SPDK_CONFIG_FUZZER_LIB 00:05:58.430 #undef SPDK_CONFIG_GOLANG 00:05:58.430 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:58.430 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:58.430 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:58.430 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:58.430 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:58.430 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:58.430 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:58.430 #define SPDK_CONFIG_IDXD 1 00:05:58.430 #define SPDK_CONFIG_IDXD_KERNEL 1 00:05:58.430 #undef SPDK_CONFIG_IPSEC_MB 00:05:58.430 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:58.430 #define SPDK_CONFIG_ISAL 1 00:05:58.430 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:58.430 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:58.430 #define SPDK_CONFIG_LIBDIR 00:05:58.430 #undef SPDK_CONFIG_LTO 00:05:58.430 #define SPDK_CONFIG_MAX_LCORES 128 00:05:58.430 #define SPDK_CONFIG_NVME_CUSE 1 00:05:58.430 #undef SPDK_CONFIG_OCF 00:05:58.430 #define SPDK_CONFIG_OCF_PATH 00:05:58.430 #define 
SPDK_CONFIG_OPENSSL_PATH 00:05:58.430 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:58.430 #define SPDK_CONFIG_PGO_DIR 00:05:58.430 #undef SPDK_CONFIG_PGO_USE 00:05:58.430 #define SPDK_CONFIG_PREFIX /usr/local 00:05:58.430 #undef SPDK_CONFIG_RAID5F 00:05:58.430 #undef SPDK_CONFIG_RBD 00:05:58.430 #define SPDK_CONFIG_RDMA 1 00:05:58.430 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:58.430 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:58.430 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:58.430 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:58.430 #define SPDK_CONFIG_SHARED 1 00:05:58.430 #undef SPDK_CONFIG_SMA 00:05:58.430 #define SPDK_CONFIG_TESTS 1 00:05:58.430 #undef SPDK_CONFIG_TSAN 00:05:58.430 #define SPDK_CONFIG_UBLK 1 00:05:58.430 #define SPDK_CONFIG_UBSAN 1 00:05:58.430 #undef SPDK_CONFIG_UNIT_TESTS 00:05:58.430 #undef SPDK_CONFIG_URING 00:05:58.430 #define SPDK_CONFIG_URING_PATH 00:05:58.430 #undef SPDK_CONFIG_URING_ZNS 00:05:58.430 #undef SPDK_CONFIG_USDT 00:05:58.430 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:58.430 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:58.430 #define SPDK_CONFIG_VFIO_USER 1 00:05:58.430 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:58.430 #define SPDK_CONFIG_VHOST 1 00:05:58.430 #define SPDK_CONFIG_VIRTIO 1 00:05:58.430 #undef SPDK_CONFIG_VTUNE 00:05:58.430 #define SPDK_CONFIG_VTUNE_DIR 00:05:58.430 #define SPDK_CONFIG_WERROR 1 00:05:58.430 #define SPDK_CONFIG_WPDK_DIR 00:05:58.430 #undef SPDK_CONFIG_XNVME 00:05:58.430 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.430 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:05:58.431 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:05:58.432 19:00:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3199930 ]] 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3199930 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.SeKyTX 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.SeKyTX/tests/target /tmp/spdk.SeKyTX 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:05:58.432 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55522566144 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994692608 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6472126464 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941708288 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997344256 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390178816 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398940160 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996189184 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997348352 00:05:58.433 19:00:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1159168 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:05:58.433 * Looking for test storage... 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55522566144 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8686718976 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:05:58.433 19:00:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.433 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:58.434 19:00:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:58.434 19:00:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.972 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:00.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:00.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.973 19:00:40 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:00.973 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:00.973 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:00.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:00.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:06:00.973 00:06:00.973 --- 10.0.0.2 ping statistics --- 00:06:00.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.973 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:00.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:00.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:06:00.973 00:06:00.973 --- 10.0.0.1 ping statistics --- 00:06:00.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.973 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.973 19:00:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.973 ************************************ 00:06:00.973 START TEST nvmf_filesystem_no_in_capsule 00:06:00.973 ************************************ 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3201559 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3201559 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3201559 ']' 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.973 19:00:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:00.973 [2024-07-15 19:00:41.069985] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:00.973 [2024-07-15 19:00:41.070071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.973 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.973 [2024-07-15 19:00:41.143047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.973 [2024-07-15 19:00:41.266698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.973 [2024-07-15 19:00:41.266757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.973 [2024-07-15 19:00:41.266782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.973 [2024-07-15 19:00:41.266796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.974 [2024-07-15 19:00:41.266808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
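For reference, the target/initiator topology that the nvmf_tcp_init steps above construct can be reproduced standalone with roughly the following commands (a sketch; namespace, interface names, addresses and the nvmf_tgt invocation are taken verbatim from this trace):
# flush the two e810 ports and move the target-side port into its own namespace
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator side stays in the root namespace on 10.0.0.1, target side gets 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic on port 4420 and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target application is then launched inside the namespace, as the trace above shows
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &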
00:06:00.974 [2024-07-15 19:00:41.266902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.974 [2024-07-15 19:00:41.266958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.974 [2024-07-15 19:00:41.267013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.974 [2024-07-15 19:00:41.267016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.910 [2024-07-15 19:00:42.055078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.910 Malloc1 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.910 [2024-07-15 19:00:42.239195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:01.910 { 00:06:01.910 "name": "Malloc1", 00:06:01.910 "aliases": [ 00:06:01.910 "4864496a-52cd-4d8b-8a83-3749a29b8fcd" 00:06:01.910 ], 00:06:01.910 "product_name": "Malloc disk", 00:06:01.910 "block_size": 512, 00:06:01.910 "num_blocks": 1048576, 00:06:01.910 "uuid": "4864496a-52cd-4d8b-8a83-3749a29b8fcd", 00:06:01.910 "assigned_rate_limits": { 00:06:01.910 "rw_ios_per_sec": 0, 00:06:01.910 "rw_mbytes_per_sec": 0, 00:06:01.910 "r_mbytes_per_sec": 0, 00:06:01.910 "w_mbytes_per_sec": 0 00:06:01.910 }, 00:06:01.910 "claimed": true, 00:06:01.910 "claim_type": "exclusive_write", 00:06:01.910 "zoned": false, 00:06:01.910 "supported_io_types": { 00:06:01.910 "read": true, 00:06:01.910 "write": true, 00:06:01.910 "unmap": true, 00:06:01.910 "flush": true, 00:06:01.910 "reset": true, 00:06:01.910 "nvme_admin": false, 00:06:01.910 "nvme_io": false, 00:06:01.910 "nvme_io_md": false, 00:06:01.910 "write_zeroes": true, 00:06:01.910 "zcopy": true, 00:06:01.910 "get_zone_info": false, 00:06:01.910 "zone_management": false, 00:06:01.910 "zone_append": false, 00:06:01.910 "compare": false, 00:06:01.910 "compare_and_write": false, 00:06:01.910 "abort": true, 00:06:01.910 "seek_hole": false, 00:06:01.910 "seek_data": false, 00:06:01.910 "copy": true, 00:06:01.910 "nvme_iov_md": false 00:06:01.910 }, 00:06:01.910 "memory_domains": [ 00:06:01.910 { 
00:06:01.910 "dma_device_id": "system", 00:06:01.910 "dma_device_type": 1 00:06:01.910 }, 00:06:01.910 { 00:06:01.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.910 "dma_device_type": 2 00:06:01.910 } 00:06:01.910 ], 00:06:01.910 "driver_specific": {} 00:06:01.910 } 00:06:01.910 ]' 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:01.910 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:02.850 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:02.850 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:02.850 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:02.850 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:02.850 19:00:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:04.773 19:00:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:04.773 19:00:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:05.713 19:00:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:07.092 ************************************ 00:06:07.092 START TEST filesystem_ext4 00:06:07.092 ************************************ 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:07.092 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:07.093 19:00:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:07.093 19:00:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:07.093 mke2fs 1.46.5 (30-Dec-2021) 00:06:07.093 Discarding device blocks: 0/522240 done 00:06:07.093 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:07.093 Filesystem UUID: 368807a4-8237-4467-b7bd-e413aecef36a 00:06:07.093 Superblock backups stored on blocks: 00:06:07.093 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:07.093 00:06:07.093 Allocating group tables: 0/64 done 00:06:07.093 Writing inode tables: 0/64 done 00:06:09.630 Creating journal (8192 blocks): done 00:06:09.630 Writing superblocks and filesystem accounting information: 0/64 done 00:06:09.630 00:06:09.630 19:00:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:09.630 19:00:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3201559 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:10.599 00:06:10.599 real 0m3.699s 00:06:10.599 user 0m0.013s 00:06:10.599 sys 0m0.057s 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:10.599 ************************************ 00:06:10.599 END TEST filesystem_ext4 00:06:10.599 ************************************ 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:10.599 19:00:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:10.599 ************************************ 00:06:10.599 START TEST filesystem_btrfs 00:06:10.599 ************************************ 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:10.599 19:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:10.858 btrfs-progs v6.6.2 00:06:10.858 See https://btrfs.readthedocs.io for more information. 00:06:10.858 00:06:10.858 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:10.858 NOTE: several default settings have changed in version 5.15, please make sure 00:06:10.858 this does not affect your deployments: 00:06:10.858 - DUP for metadata (-m dup) 00:06:10.858 - enabled no-holes (-O no-holes) 00:06:10.858 - enabled free-space-tree (-R free-space-tree) 00:06:10.858 00:06:10.858 Label: (null) 00:06:10.858 UUID: 0a258c70-96bb-4e47-83f0-9c1808d60ff5 00:06:10.858 Node size: 16384 00:06:10.858 Sector size: 4096 00:06:10.858 Filesystem size: 510.00MiB 00:06:10.858 Block group profiles: 00:06:10.858 Data: single 8.00MiB 00:06:10.858 Metadata: DUP 32.00MiB 00:06:10.858 System: DUP 8.00MiB 00:06:10.858 SSD detected: yes 00:06:10.858 Zoned device: no 00:06:10.858 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:10.858 Runtime features: free-space-tree 00:06:10.858 Checksum: crc32c 00:06:10.858 Number of devices: 1 00:06:10.858 Devices: 00:06:10.858 ID SIZE PATH 00:06:10.858 1 510.00MiB /dev/nvme0n1p1 00:06:10.858 00:06:10.858 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:10.858 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3201559 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.118 00:06:11.118 real 0m0.550s 00:06:11.118 user 0m0.025s 00:06:11.118 sys 0m0.105s 00:06:11.118 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:11.119 ************************************ 00:06:11.119 END TEST filesystem_btrfs 00:06:11.119 ************************************ 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.119 ************************************ 00:06:11.119 START TEST filesystem_xfs 00:06:11.119 ************************************ 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:11.119 19:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:11.378 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:11.378 = sectsz=512 attr=2, projid32bit=1 00:06:11.378 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:11.378 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:11.378 data = bsize=4096 blocks=130560, imaxpct=25 00:06:11.378 = sunit=0 swidth=0 blks 00:06:11.378 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:11.378 log =internal log bsize=4096 blocks=16384, version=2 00:06:11.378 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:11.378 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:12.318 Discarding blocks...Done. 
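Each filesystem subtest in this run performs the same verification after mkfs; the xtrace that follows for xfs (and the ext4 and btrfs passes above) boils down to this sketch, where 3201559 is the nvmf_tgt pid from this trace:
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
# the target must still be alive and the exported block device still visible
kill -0 3201559
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1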
00:06:12.318 19:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:12.318 19:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3201559 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:14.242 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:14.243 00:06:14.243 real 0m2.744s 00:06:14.243 user 0m0.011s 00:06:14.243 sys 0m0.065s 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:14.243 ************************************ 00:06:14.243 END TEST filesystem_xfs 00:06:14.243 ************************************ 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:14.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:14.243 19:00:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3201559 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3201559 ']' 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3201559 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3201559 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3201559' 00:06:14.243 killing process with pid 3201559 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3201559 00:06:14.243 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3201559 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:14.813 00:06:14.813 real 0m13.956s 00:06:14.813 user 0m53.650s 00:06:14.813 sys 0m1.971s 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.813 ************************************ 00:06:14.813 END TEST nvmf_filesystem_no_in_capsule 00:06:14.813 ************************************ 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.813 19:00:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.813 ************************************ 00:06:14.813 START TEST nvmf_filesystem_in_capsule 00:06:14.813 ************************************ 00:06:14.813 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:14.813 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3203394 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3203394 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3203394 ']' 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.814 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:14.814 [2024-07-15 19:00:55.072724] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:14.814 [2024-07-15 19:00:55.072804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.814 [2024-07-15 19:00:55.136017] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.075 [2024-07-15 19:00:55.247076] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.075 [2024-07-15 19:00:55.247132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
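Functionally, the only difference between this in_capsule pass and the no_in_capsule pass above is the -c value handed to nvmf_create_transport a few lines further down (a side-by-side sketch; rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: up to 4096 bytes of data carried in the command capsule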
00:06:15.075 [2024-07-15 19:00:55.247155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.075 [2024-07-15 19:00:55.247166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.075 [2024-07-15 19:00:55.247176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:15.075 [2024-07-15 19:00:55.247228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.075 [2024-07-15 19:00:55.247290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.075 [2024-07-15 19:00:55.247356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.075 [2024-07-15 19:00:55.247359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.075 [2024-07-15 19:00:55.390517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.075 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.335 Malloc1 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.335 19:00:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.335 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.336 [2024-07-15 19:00:55.566270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:15.336 { 00:06:15.336 "name": "Malloc1", 00:06:15.336 "aliases": [ 00:06:15.336 "850e0dc9-fb6c-4ace-8e7c-ca1e9607ee2d" 00:06:15.336 ], 00:06:15.336 "product_name": "Malloc disk", 00:06:15.336 "block_size": 512, 00:06:15.336 "num_blocks": 1048576, 00:06:15.336 "uuid": "850e0dc9-fb6c-4ace-8e7c-ca1e9607ee2d", 00:06:15.336 "assigned_rate_limits": { 00:06:15.336 "rw_ios_per_sec": 0, 00:06:15.336 "rw_mbytes_per_sec": 0, 00:06:15.336 "r_mbytes_per_sec": 0, 00:06:15.336 "w_mbytes_per_sec": 0 00:06:15.336 }, 00:06:15.336 "claimed": true, 00:06:15.336 "claim_type": "exclusive_write", 00:06:15.336 "zoned": false, 00:06:15.336 "supported_io_types": { 00:06:15.336 "read": true, 00:06:15.336 "write": true, 00:06:15.336 "unmap": true, 00:06:15.336 "flush": true, 00:06:15.336 "reset": true, 00:06:15.336 "nvme_admin": false, 00:06:15.336 "nvme_io": false, 00:06:15.336 "nvme_io_md": false, 00:06:15.336 "write_zeroes": true, 00:06:15.336 "zcopy": true, 00:06:15.336 "get_zone_info": false, 00:06:15.336 "zone_management": false, 00:06:15.336 
"zone_append": false, 00:06:15.336 "compare": false, 00:06:15.336 "compare_and_write": false, 00:06:15.336 "abort": true, 00:06:15.336 "seek_hole": false, 00:06:15.336 "seek_data": false, 00:06:15.336 "copy": true, 00:06:15.336 "nvme_iov_md": false 00:06:15.336 }, 00:06:15.336 "memory_domains": [ 00:06:15.336 { 00:06:15.336 "dma_device_id": "system", 00:06:15.336 "dma_device_type": 1 00:06:15.336 }, 00:06:15.336 { 00:06:15.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.336 "dma_device_type": 2 00:06:15.336 } 00:06:15.336 ], 00:06:15.336 "driver_specific": {} 00:06:15.336 } 00:06:15.336 ]' 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:15.336 19:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:16.275 19:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:16.275 19:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:16.275 19:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:16.275 19:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:16.275 19:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:18.183 19:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:18.750 19:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.125 ************************************ 00:06:20.125 START TEST filesystem_in_capsule_ext4 00:06:20.125 ************************************ 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:20.125 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:20.126 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:20.126 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:20.126 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:20.126 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:20.126 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:20.126 19:01:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:20.126 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:20.126 19:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:20.126 mke2fs 1.46.5 (30-Dec-2021) 00:06:20.126 Discarding device blocks: 0/522240 done 00:06:20.126 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:20.126 Filesystem UUID: 0c56f3df-cfd8-4763-afa5-c69d780f5757 00:06:20.126 Superblock backups stored on blocks: 00:06:20.126 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:20.126 00:06:20.126 Allocating group tables: 0/64 done 00:06:20.126 Writing inode tables: 0/64 done 00:06:20.126 Creating journal (8192 blocks): done 00:06:21.205 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:06:21.206 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:21.206 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3203394 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.464 00:06:21.464 real 0m1.482s 00:06:21.464 user 0m0.014s 00:06:21.464 sys 0m0.062s 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:21.464 ************************************ 00:06:21.464 END TEST filesystem_in_capsule_ext4 00:06:21.464 ************************************ 
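Editorial note (not part of the captured log): the ext4 pass above ends with the per-filesystem smoke test. A condensed sketch of that sequence as filesystem.sh exercises it, where $nvmfpid stands for the target pid 3203394 checked in the trace:

# sketch only -- the post-mkfs checks repeated for each fstype
mount /dev/nvme0n1p1 /mnt/device         # mount the freshly created filesystem
touch /mnt/device/aaa                    # prove it accepts writes
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                       # target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible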
00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.464 ************************************ 00:06:21.464 START TEST filesystem_in_capsule_btrfs 00:06:21.464 ************************************ 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:21.464 19:01:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:21.723 btrfs-progs v6.6.2 00:06:21.723 See https://btrfs.readthedocs.io for more information. 00:06:21.723 00:06:21.723 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:21.723 NOTE: several default settings have changed in version 5.15, please make sure 00:06:21.723 this does not affect your deployments: 00:06:21.723 - DUP for metadata (-m dup) 00:06:21.723 - enabled no-holes (-O no-holes) 00:06:21.723 - enabled free-space-tree (-R free-space-tree) 00:06:21.723 00:06:21.723 Label: (null) 00:06:21.723 UUID: d285d2a8-b542-4f54-9f7f-7d340ffeed2e 00:06:21.723 Node size: 16384 00:06:21.723 Sector size: 4096 00:06:21.723 Filesystem size: 510.00MiB 00:06:21.723 Block group profiles: 00:06:21.723 Data: single 8.00MiB 00:06:21.723 Metadata: DUP 32.00MiB 00:06:21.723 System: DUP 8.00MiB 00:06:21.723 SSD detected: yes 00:06:21.723 Zoned device: no 00:06:21.723 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:21.723 Runtime features: free-space-tree 00:06:21.723 Checksum: crc32c 00:06:21.723 Number of devices: 1 00:06:21.723 Devices: 00:06:21.723 ID SIZE PATH 00:06:21.723 1 510.00MiB /dev/nvme0n1p1 00:06:21.723 00:06:21.723 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:21.723 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3203394 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:22.289 00:06:22.289 real 0m0.751s 00:06:22.289 user 0m0.011s 00:06:22.289 sys 0m0.122s 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:22.289 ************************************ 00:06:22.289 END TEST filesystem_in_capsule_btrfs 00:06:22.289 ************************************ 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.289 ************************************ 00:06:22.289 START TEST filesystem_in_capsule_xfs 00:06:22.289 ************************************ 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:22.289 19:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:22.289 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:22.289 = sectsz=512 attr=2, projid32bit=1 00:06:22.289 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:22.289 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:22.289 data = bsize=4096 blocks=130560, imaxpct=25 00:06:22.289 = sunit=0 swidth=0 blks 00:06:22.289 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:22.289 log =internal log bsize=4096 blocks=16384, version=2 00:06:22.289 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:22.289 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:23.227 Discarding blocks...Done. 
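Editorial note (not part of the captured log): the three mkfs traces above all go through the same make_filesystem helper; only the force flag differs (ext4 takes -F, btrfs and xfs take -f). A condensed sketch of that selection, with the helper's retry counter omitted:

# sketch only -- reconstructed from the autotest_common.sh traces above
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F          # mkfs.ext4 forces with -F
    else
        force=-f          # mkfs.btrfs / mkfs.xfs force with -f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}
# e.g. make_filesystem xfs /dev/nvme0n1p1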
00:06:23.227 19:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:23.227 19:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3203394 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:25.130 00:06:25.130 real 0m2.909s 00:06:25.130 user 0m0.016s 00:06:25.130 sys 0m0.059s 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:25.130 ************************************ 00:06:25.130 END TEST filesystem_in_capsule_xfs 00:06:25.130 ************************************ 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:25.130 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:25.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:25.389 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:25.389 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:25.389 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:25.389 19:01:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:25.389 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3203394 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3203394 ']' 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3203394 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3203394 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3203394' 00:06:25.390 killing process with pid 3203394 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3203394 00:06:25.390 19:01:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3203394 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:25.992 00:06:25.992 real 0m11.110s 00:06:25.992 user 0m42.512s 00:06:25.992 sys 0m1.634s 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:25.992 ************************************ 00:06:25.992 END TEST nvmf_filesystem_in_capsule 00:06:25.992 ************************************ 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:25.992 rmmod nvme_tcp 00:06:25.992 rmmod nvme_fabrics 00:06:25.992 rmmod nvme_keyring 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.992 19:01:06 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.901 19:01:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:27.901 00:06:27.901 real 0m29.666s 00:06:27.901 user 1m37.124s 00:06:27.901 sys 0m5.243s 00:06:27.901 19:01:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.901 19:01:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.901 ************************************ 00:06:27.901 END TEST nvmf_filesystem 00:06:27.901 ************************************ 00:06:27.901 19:01:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:27.901 19:01:08 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:27.901 19:01:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:27.901 19:01:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.901 19:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.901 ************************************ 00:06:27.901 START TEST nvmf_target_discovery 00:06:27.901 ************************************ 00:06:27.901 19:01:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:28.176 * Looking for test storage... 
00:06:28.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.176 19:01:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:28.177 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:28.178 19:01:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:28.178 19:01:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:30.094 19:01:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:30.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:30.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:30.094 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:30.095 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:30.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:30.095 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:30.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:30.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:06:30.096 00:06:30.096 --- 10.0.0.2 ping statistics --- 00:06:30.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.096 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:30.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:30.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:06:30.096 00:06:30.096 --- 10.0.0.1 ping statistics --- 00:06:30.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:30.096 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3206864 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3206864 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3206864 ']' 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:30.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.096 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.372 [2024-07-15 19:01:10.541827] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:30.372 [2024-07-15 19:01:10.541920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.372 [2024-07-15 19:01:10.619108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.372 [2024-07-15 19:01:10.744349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.372 [2024-07-15 19:01:10.744411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.372 [2024-07-15 19:01:10.744428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:30.372 [2024-07-15 19:01:10.744450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:30.372 [2024-07-15 19:01:10.744461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.372 [2024-07-15 19:01:10.744543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.372 [2024-07-15 19:01:10.744610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.372 [2024-07-15 19:01:10.744673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.372 [2024-07-15 19:01:10.744676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 [2024-07-15 19:01:10.904827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 Null1 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 [2024-07-15 19:01:10.945187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 Null2 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:30.633 19:01:10 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 Null3 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 Null4 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.633 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.634 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:30.894 00:06:30.894 Discovery Log Number of Records 6, Generation counter 6 00:06:30.894 =====Discovery Log Entry 0====== 00:06:30.894 trtype: tcp 00:06:30.894 adrfam: ipv4 00:06:30.894 subtype: current discovery subsystem 00:06:30.894 treq: not required 00:06:30.894 portid: 0 00:06:30.894 trsvcid: 4420 00:06:30.894 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:30.894 traddr: 10.0.0.2 00:06:30.894 eflags: explicit discovery connections, duplicate discovery information 00:06:30.894 sectype: none 00:06:30.894 =====Discovery Log Entry 1====== 00:06:30.894 trtype: tcp 00:06:30.894 adrfam: ipv4 00:06:30.894 subtype: nvme subsystem 00:06:30.894 treq: not required 00:06:30.894 portid: 0 00:06:30.894 trsvcid: 4420 00:06:30.894 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:30.894 traddr: 10.0.0.2 00:06:30.894 eflags: none 00:06:30.894 sectype: none 00:06:30.894 =====Discovery Log Entry 2====== 00:06:30.894 trtype: tcp 00:06:30.894 adrfam: ipv4 00:06:30.894 subtype: nvme subsystem 00:06:30.894 treq: not required 00:06:30.894 portid: 0 00:06:30.894 trsvcid: 4420 00:06:30.894 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:30.894 traddr: 10.0.0.2 00:06:30.894 eflags: none 00:06:30.894 sectype: none 00:06:30.894 =====Discovery Log Entry 3====== 00:06:30.894 trtype: tcp 00:06:30.894 adrfam: ipv4 00:06:30.894 subtype: nvme subsystem 00:06:30.894 treq: not required 00:06:30.894 portid: 0 00:06:30.894 trsvcid: 4420 00:06:30.894 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:30.894 traddr: 10.0.0.2 00:06:30.894 eflags: none 00:06:30.894 sectype: none 00:06:30.894 =====Discovery Log Entry 4====== 00:06:30.894 trtype: tcp 00:06:30.894 adrfam: ipv4 00:06:30.894 subtype: nvme subsystem 00:06:30.894 treq: not required 
00:06:30.894 portid: 0 00:06:30.894 trsvcid: 4420 00:06:30.894 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:30.894 traddr: 10.0.0.2 00:06:30.894 eflags: none 00:06:30.894 sectype: none 00:06:30.894 =====Discovery Log Entry 5====== 00:06:30.894 trtype: tcp 00:06:30.894 adrfam: ipv4 00:06:30.894 subtype: discovery subsystem referral 00:06:30.894 treq: not required 00:06:30.894 portid: 0 00:06:30.894 trsvcid: 4430 00:06:30.894 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:30.894 traddr: 10.0.0.2 00:06:30.894 eflags: none 00:06:30.894 sectype: none 00:06:30.894 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:30.894 Perform nvmf subsystem discovery via RPC 00:06:30.894 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:30.894 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.894 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.894 [ 00:06:30.894 { 00:06:30.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:30.894 "subtype": "Discovery", 00:06:30.894 "listen_addresses": [ 00:06:30.894 { 00:06:30.894 "trtype": "TCP", 00:06:30.894 "adrfam": "IPv4", 00:06:30.894 "traddr": "10.0.0.2", 00:06:30.894 "trsvcid": "4420" 00:06:30.894 } 00:06:30.894 ], 00:06:30.894 "allow_any_host": true, 00:06:30.894 "hosts": [] 00:06:30.894 }, 00:06:30.894 { 00:06:30.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:30.894 "subtype": "NVMe", 00:06:30.894 "listen_addresses": [ 00:06:30.894 { 00:06:30.894 "trtype": "TCP", 00:06:30.894 "adrfam": "IPv4", 00:06:30.894 "traddr": "10.0.0.2", 00:06:30.894 "trsvcid": "4420" 00:06:30.894 } 00:06:30.894 ], 00:06:30.894 "allow_any_host": true, 00:06:30.894 "hosts": [], 00:06:30.894 "serial_number": "SPDK00000000000001", 00:06:30.894 "model_number": "SPDK bdev Controller", 00:06:30.894 "max_namespaces": 32, 00:06:30.894 "min_cntlid": 1, 00:06:30.894 "max_cntlid": 65519, 00:06:30.894 "namespaces": [ 00:06:30.894 { 00:06:30.894 "nsid": 1, 00:06:30.894 "bdev_name": "Null1", 00:06:30.894 "name": "Null1", 00:06:30.894 "nguid": "F73161275FC0476C9830D97887FEA192", 00:06:30.894 "uuid": "f7316127-5fc0-476c-9830-d97887fea192" 00:06:30.894 } 00:06:30.894 ] 00:06:30.894 }, 00:06:30.894 { 00:06:30.894 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:30.894 "subtype": "NVMe", 00:06:30.894 "listen_addresses": [ 00:06:30.894 { 00:06:30.894 "trtype": "TCP", 00:06:30.894 "adrfam": "IPv4", 00:06:30.894 "traddr": "10.0.0.2", 00:06:30.894 "trsvcid": "4420" 00:06:30.894 } 00:06:30.894 ], 00:06:30.894 "allow_any_host": true, 00:06:30.894 "hosts": [], 00:06:30.894 "serial_number": "SPDK00000000000002", 00:06:30.894 "model_number": "SPDK bdev Controller", 00:06:30.894 "max_namespaces": 32, 00:06:30.894 "min_cntlid": 1, 00:06:30.894 "max_cntlid": 65519, 00:06:30.894 "namespaces": [ 00:06:30.894 { 00:06:30.894 "nsid": 1, 00:06:30.894 "bdev_name": "Null2", 00:06:30.894 "name": "Null2", 00:06:30.894 "nguid": "D7B8F07B216E4FD7985AE8B6D534D502", 00:06:30.894 "uuid": "d7b8f07b-216e-4fd7-985a-e8b6d534d502" 00:06:30.894 } 00:06:30.894 ] 00:06:30.894 }, 00:06:30.894 { 00:06:30.894 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:30.894 "subtype": "NVMe", 00:06:30.894 "listen_addresses": [ 00:06:30.894 { 00:06:30.894 "trtype": "TCP", 00:06:30.894 "adrfam": "IPv4", 00:06:30.894 "traddr": "10.0.0.2", 00:06:30.895 "trsvcid": "4420" 00:06:30.895 } 00:06:30.895 ], 00:06:30.895 "allow_any_host": true, 
00:06:30.895 "hosts": [], 00:06:30.895 "serial_number": "SPDK00000000000003", 00:06:30.895 "model_number": "SPDK bdev Controller", 00:06:30.895 "max_namespaces": 32, 00:06:30.895 "min_cntlid": 1, 00:06:30.895 "max_cntlid": 65519, 00:06:30.895 "namespaces": [ 00:06:30.895 { 00:06:30.895 "nsid": 1, 00:06:30.895 "bdev_name": "Null3", 00:06:30.895 "name": "Null3", 00:06:30.895 "nguid": "BF3E4636AE9A490F82B7634E7D0748BB", 00:06:30.895 "uuid": "bf3e4636-ae9a-490f-82b7-634e7d0748bb" 00:06:30.895 } 00:06:30.895 ] 00:06:30.895 }, 00:06:30.895 { 00:06:30.895 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:30.895 "subtype": "NVMe", 00:06:30.895 "listen_addresses": [ 00:06:30.895 { 00:06:30.895 "trtype": "TCP", 00:06:30.895 "adrfam": "IPv4", 00:06:30.895 "traddr": "10.0.0.2", 00:06:30.895 "trsvcid": "4420" 00:06:30.895 } 00:06:30.895 ], 00:06:30.895 "allow_any_host": true, 00:06:30.895 "hosts": [], 00:06:30.895 "serial_number": "SPDK00000000000004", 00:06:30.895 "model_number": "SPDK bdev Controller", 00:06:30.895 "max_namespaces": 32, 00:06:30.895 "min_cntlid": 1, 00:06:30.895 "max_cntlid": 65519, 00:06:30.895 "namespaces": [ 00:06:30.895 { 00:06:30.895 "nsid": 1, 00:06:30.895 "bdev_name": "Null4", 00:06:30.895 "name": "Null4", 00:06:30.895 "nguid": "1B97E87AD0694DB89F6640BFD72943F1", 00:06:30.895 "uuid": "1b97e87a-d069-4db8-9f66-40bfd72943f1" 00:06:30.895 } 00:06:30.895 ] 00:06:30.895 } 00:06:30.895 ] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:30.895 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:30.895 rmmod nvme_tcp 00:06:31.154 rmmod nvme_fabrics 00:06:31.154 rmmod nvme_keyring 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3206864 ']' 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3206864 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3206864 ']' 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3206864 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3206864 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3206864' 00:06:31.154 killing process with pid 3206864 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3206864 00:06:31.154 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3206864 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:31.415 19:01:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.328 19:01:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:33.328 00:06:33.328 real 0m5.398s 00:06:33.328 user 0m4.308s 00:06:33.328 sys 0m1.822s 00:06:33.328 19:01:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.328 19:01:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:33.328 ************************************ 00:06:33.328 END TEST nvmf_target_discovery 00:06:33.328 ************************************ 00:06:33.328 19:01:13 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:33.328 19:01:13 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:33.328 19:01:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:33.328 19:01:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.328 19:01:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.328 ************************************ 00:06:33.328 START TEST nvmf_referrals 00:06:33.328 ************************************ 00:06:33.328 19:01:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:33.587 * Looking for test storage... 00:06:33.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.587 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:33.588 19:01:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.491 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.492 19:01:15 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:35.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:35.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:35.492 19:01:15 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:35.492 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:35.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.492 19:01:15 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:35.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:06:35.492 00:06:35.492 --- 10.0.0.2 ping statistics --- 00:06:35.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.492 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:06:35.492 00:06:35.492 --- 10.0.0.1 ping statistics --- 00:06:35.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.492 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3208835 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3208835 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3208835 ']' 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:35.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.492 19:01:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:35.492 [2024-07-15 19:01:15.880447] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:06:35.492 [2024-07-15 19:01:15.880547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.751 [2024-07-15 19:01:15.951944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.751 [2024-07-15 19:01:16.074920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.751 [2024-07-15 19:01:16.074977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.751 [2024-07-15 19:01:16.074994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.751 [2024-07-15 19:01:16.075007] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.751 [2024-07-15 19:01:16.075019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:35.751 [2024-07-15 19:01:16.075074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.751 [2024-07-15 19:01:16.075104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.751 [2024-07-15 19:01:16.075138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.751 [2024-07-15 19:01:16.075141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.683 [2024-07-15 19:01:16.881273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.683 [2024-07-15 19:01:16.893410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.683 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.684 19:01:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:36.684 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:36.684 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:36.684 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:36.684 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.684 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:36.941 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:36.942 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:37.199 19:01:17 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.199 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:37.474 19:01:17 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.474 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:37.731 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:37.731 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:37.731 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:37.731 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:37.731 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.731 19:01:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:37.731 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:37.988 
19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:37.988 rmmod nvme_tcp 00:06:37.988 rmmod nvme_fabrics 00:06:37.988 rmmod nvme_keyring 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3208835 ']' 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3208835 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3208835 ']' 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3208835 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3208835 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3208835' 00:06:37.988 killing process with pid 3208835 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3208835 00:06:37.988 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3208835 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.254 19:01:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.788 19:01:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:40.788 00:06:40.788 real 0m6.925s 00:06:40.788 user 0m11.620s 00:06:40.788 sys 0m2.062s 00:06:40.788 19:01:20 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.788 19:01:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.788 ************************************ 00:06:40.788 END TEST nvmf_referrals 00:06:40.788 ************************************ 00:06:40.788 19:01:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:40.788 19:01:20 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:40.788 19:01:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:40.788 19:01:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.788 19:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.788 ************************************ 00:06:40.788 START TEST nvmf_connect_disconnect 00:06:40.788 ************************************ 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:40.788 * Looking for test storage... 00:06:40.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.788 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.789 19:01:20 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:40.789 19:01:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:42.732 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:42.732 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.732 19:01:22 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:42.732 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:42.732 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.732 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:06:42.732 00:06:42.732 --- 10.0.0.2 ping statistics --- 00:06:42.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.732 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:42.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:06:42.733 00:06:42.733 --- 10.0.0.1 ping statistics --- 00:06:42.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.733 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3211253 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3211253 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3211253 ']' 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.733 19:01:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 [2024-07-15 19:01:23.017120] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:06:42.733 [2024-07-15 19:01:23.017208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.733 [2024-07-15 19:01:23.083851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.992 [2024-07-15 19:01:23.195644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.992 [2024-07-15 19:01:23.195704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.992 [2024-07-15 19:01:23.195717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.992 [2024-07-15 19:01:23.195728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.992 [2024-07-15 19:01:23.195737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.992 [2024-07-15 19:01:23.195820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.992 [2024-07-15 19:01:23.195898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.992 [2024-07-15 19:01:23.195959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.992 [2024-07-15 19:01:23.195963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.992 [2024-07-15 19:01:23.349712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:42.992 19:01:23 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.992 [2024-07-15 19:01:23.407192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:42.992 19:01:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:46.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:51.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:53.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:57.201 rmmod nvme_tcp 00:06:57.201 rmmod nvme_fabrics 00:06:57.201 rmmod nvme_keyring 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3211253 ']' 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3211253 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 3211253 ']' 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3211253 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3211253 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3211253' 00:06:57.201 killing process with pid 3211253 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3211253 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3211253 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.201 19:01:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.105 19:01:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:59.105 00:06:59.105 real 0m18.734s 00:06:59.105 user 0m56.160s 00:06:59.105 sys 0m3.315s 00:06:59.105 19:01:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.105 19:01:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:59.105 ************************************ 00:06:59.105 END TEST nvmf_connect_disconnect 00:06:59.105 ************************************ 00:06:59.105 19:01:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:59.105 19:01:39 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:59.105 19:01:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:59.105 19:01:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.105 19:01:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:59.105 ************************************ 00:06:59.105 START TEST nvmf_multitarget 00:06:59.105 ************************************ 00:06:59.105 19:01:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:59.363 * Looking for test storage... 
00:06:59.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.363 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:59.364 19:01:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:01.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:01.264 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:01.264 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:01.264 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.264 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:01.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:07:01.264 00:07:01.264 --- 10.0.0.2 ping statistics --- 00:07:01.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.264 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:07:01.523 00:07:01.523 --- 10.0.0.1 ping statistics --- 00:07:01.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.523 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3214899 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3214899 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3214899 ']' 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.523 19:01:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:01.523 [2024-07-15 19:01:41.780459] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:01.523 [2024-07-15 19:01:41.780558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.523 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.523 [2024-07-15 19:01:41.850451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.794 [2024-07-15 19:01:41.971247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.794 [2024-07-15 19:01:41.971317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.794 [2024-07-15 19:01:41.971334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.794 [2024-07-15 19:01:41.971347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.794 [2024-07-15 19:01:41.971360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.794 [2024-07-15 19:01:41.971445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.794 [2024-07-15 19:01:41.971505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.794 [2024-07-15 19:01:41.971555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.794 [2024-07-15 19:01:41.971559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.360 19:01:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:02.361 19:01:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:02.645 19:01:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:02.645 19:01:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:02.645 "nvmf_tgt_1" 00:07:02.645 19:01:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:02.902 "nvmf_tgt_2" 00:07:02.902 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:02.902 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:02.902 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:02.902 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:02.902 true 00:07:02.902 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:03.159 true 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.159 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.159 rmmod nvme_tcp 00:07:03.159 rmmod nvme_fabrics 00:07:03.419 rmmod nvme_keyring 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3214899 ']' 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3214899 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3214899 ']' 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3214899 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3214899 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3214899' 00:07:03.419 killing process with pid 3214899 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3214899 00:07:03.419 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3214899 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.678 19:01:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.583 19:01:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.583 00:07:05.583 real 0m6.451s 00:07:05.583 user 0m9.333s 00:07:05.583 sys 0m1.948s 00:07:05.583 19:01:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.583 19:01:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:05.583 ************************************ 00:07:05.583 END TEST nvmf_multitarget 00:07:05.583 ************************************ 00:07:05.583 19:01:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:05.583 19:01:45 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:05.583 19:01:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:05.583 19:01:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.583 19:01:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.583 ************************************ 00:07:05.583 START TEST nvmf_rpc 00:07:05.583 ************************************ 00:07:05.583 19:01:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:05.842 * Looking for test storage... 
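The nvmf_multitarget test that just ended (END TEST above) reduces, once the harness is stripped away, to a short RPC sequence against the running target: create two extra named targets, confirm the target count went from 1 to 3, delete them again, and confirm it is back to 1. A condensed replay of the calls shown above, with the helper path shortened to be relative to the SPDK tree:

    RPC=test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length              # 1: only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length              # 3
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length              # 1 again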
00:07:05.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.842 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.843 19:01:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
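Every nvme connect later in this test identifies itself with the host NQN and host ID that common.sh sets up above: the NQN comes from nvme gen-hostnqn and the host ID is its UUID suffix. A sketch of how those values reach the nvme-cli command line; the ##*uuid: expansion is an illustrative way to strip the prefix, not necessarily the exact expression common.sh uses, while the subsystem NQN, address, and port are the ones exercised further down:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the <uuid> part
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420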
00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.766 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.767 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.767 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.767 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.767 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.767 19:01:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.767 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:07:07.767 00:07:07.767 --- 10.0.0.2 ping statistics --- 00:07:07.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.768 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:07.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:07:07.768 00:07:07.768 --- 10.0.0.1 ping statistics --- 00:07:07.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.768 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.768 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3217123 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3217123 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3217123 ']' 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.769 19:01:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.040 [2024-07-15 19:01:48.211513] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:08.040 [2024-07-15 19:01:48.211596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.040 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.040 [2024-07-15 19:01:48.281749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.040 [2024-07-15 19:01:48.402034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.040 [2024-07-15 19:01:48.402084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:08.040 [2024-07-15 19:01:48.402099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.040 [2024-07-15 19:01:48.402111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.040 [2024-07-15 19:01:48.402121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.040 [2024-07-15 19:01:48.402193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.040 [2024-07-15 19:01:48.403897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.040 [2024-07-15 19:01:48.403938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.040 [2024-07-15 19:01:48.403942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.974 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:08.974 "tick_rate": 2700000000, 00:07:08.974 "poll_groups": [ 00:07:08.974 { 00:07:08.974 "name": "nvmf_tgt_poll_group_000", 00:07:08.974 "admin_qpairs": 0, 00:07:08.974 "io_qpairs": 0, 00:07:08.974 "current_admin_qpairs": 0, 00:07:08.974 "current_io_qpairs": 0, 00:07:08.974 "pending_bdev_io": 0, 00:07:08.974 "completed_nvme_io": 0, 00:07:08.974 "transports": [] 00:07:08.974 }, 00:07:08.974 { 00:07:08.974 "name": "nvmf_tgt_poll_group_001", 00:07:08.974 "admin_qpairs": 0, 00:07:08.974 "io_qpairs": 0, 00:07:08.974 "current_admin_qpairs": 0, 00:07:08.974 "current_io_qpairs": 0, 00:07:08.975 "pending_bdev_io": 0, 00:07:08.975 "completed_nvme_io": 0, 00:07:08.975 "transports": [] 00:07:08.975 }, 00:07:08.975 { 00:07:08.975 "name": "nvmf_tgt_poll_group_002", 00:07:08.975 "admin_qpairs": 0, 00:07:08.975 "io_qpairs": 0, 00:07:08.975 "current_admin_qpairs": 0, 00:07:08.975 "current_io_qpairs": 0, 00:07:08.975 "pending_bdev_io": 0, 00:07:08.975 "completed_nvme_io": 0, 00:07:08.975 "transports": [] 00:07:08.975 }, 00:07:08.975 { 00:07:08.975 "name": "nvmf_tgt_poll_group_003", 00:07:08.975 "admin_qpairs": 0, 00:07:08.975 "io_qpairs": 0, 00:07:08.975 "current_admin_qpairs": 0, 00:07:08.975 "current_io_qpairs": 0, 00:07:08.975 "pending_bdev_io": 0, 00:07:08.975 "completed_nvme_io": 0, 00:07:08.975 "transports": [] 00:07:08.975 } 00:07:08.975 ] 00:07:08.975 }' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.975 [2024-07-15 19:01:49.319356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:08.975 "tick_rate": 2700000000, 00:07:08.975 "poll_groups": [ 00:07:08.975 { 00:07:08.975 "name": "nvmf_tgt_poll_group_000", 00:07:08.975 "admin_qpairs": 0, 00:07:08.975 "io_qpairs": 0, 00:07:08.975 "current_admin_qpairs": 0, 00:07:08.975 "current_io_qpairs": 0, 00:07:08.975 "pending_bdev_io": 0, 00:07:08.975 "completed_nvme_io": 0, 00:07:08.975 "transports": [ 00:07:08.975 { 00:07:08.975 "trtype": "TCP" 00:07:08.975 } 00:07:08.975 ] 00:07:08.975 }, 00:07:08.975 { 00:07:08.975 "name": "nvmf_tgt_poll_group_001", 00:07:08.975 "admin_qpairs": 0, 00:07:08.975 "io_qpairs": 0, 00:07:08.975 "current_admin_qpairs": 0, 00:07:08.975 "current_io_qpairs": 0, 00:07:08.975 "pending_bdev_io": 0, 00:07:08.975 "completed_nvme_io": 0, 00:07:08.975 "transports": [ 00:07:08.975 { 00:07:08.975 "trtype": "TCP" 00:07:08.975 } 00:07:08.975 ] 00:07:08.975 }, 00:07:08.975 { 00:07:08.975 "name": "nvmf_tgt_poll_group_002", 00:07:08.975 "admin_qpairs": 0, 00:07:08.975 "io_qpairs": 0, 00:07:08.975 "current_admin_qpairs": 0, 00:07:08.975 "current_io_qpairs": 0, 00:07:08.975 "pending_bdev_io": 0, 00:07:08.975 "completed_nvme_io": 0, 00:07:08.975 "transports": [ 00:07:08.975 { 00:07:08.975 "trtype": "TCP" 00:07:08.975 } 00:07:08.975 ] 00:07:08.975 }, 00:07:08.975 { 00:07:08.975 "name": "nvmf_tgt_poll_group_003", 00:07:08.975 "admin_qpairs": 0, 00:07:08.975 "io_qpairs": 0, 00:07:08.975 "current_admin_qpairs": 0, 00:07:08.975 "current_io_qpairs": 0, 00:07:08.975 "pending_bdev_io": 0, 00:07:08.975 "completed_nvme_io": 0, 00:07:08.975 "transports": [ 00:07:08.975 { 00:07:08.975 "trtype": "TCP" 00:07:08.975 } 00:07:08.975 ] 00:07:08.975 } 00:07:08.975 ] 00:07:08.975 }' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
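The TCP transport is created with the options captured in NVMF_TRANSPORT_OPTS ('-t tcp -o', plus -u 8192 here), after which the per-poll-group qpair counters are summed and expected to be zero while nothing is connected. The test's jsum helper does this with jq piped into awk; an equivalent check using rpc.py and jq alone (the relative script path and the default RPC socket are assumptions):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add'   # 0: no admin queues yet
    ./scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'      # 0: no I/O queues yet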
00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:08.975 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 Malloc1 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 [2024-07-15 19:01:49.476952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:09.233 [2024-07-15 19:01:49.499470] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:09.233 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:09.233 could not add new controller: failed to write to nvme-fabrics device 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.233 19:01:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:09.801 19:01:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:09.801 19:01:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:09.801 19:01:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:09.801 19:01:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:09.801 19:01:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:11.710 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:11.710 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:11.710 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.969 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:11.969 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.969 19:01:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:11.969 19:01:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.969 19:01:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.969 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.970 [2024-07-15 19:01:52.279847] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:11.970 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:11.970 could not add new controller: failed to write to nvme-fabrics device 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.970 19:01:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.538 19:01:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.538 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:12.538 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.538 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:12.538 19:01:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:15.073 19:01:54 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.073 19:01:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.073 [2024-07-15 19:01:55.004463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.073 19:01:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:15.331 19:01:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:15.331 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:15.331 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:15.331 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:15.331 19:01:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:17.230 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:17.230 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:17.230 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.230 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:17.230 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.230 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:17.230 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.499 [2024-07-15 19:01:57.777576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.499 19:01:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.088 19:01:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.088 19:01:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:18.088 19:01:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.088 19:01:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:18.088 19:01:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.672 [2024-07-15 19:02:00.587761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.672 19:02:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.930 19:02:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.930 19:02:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.930 19:02:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.930 19:02:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.930 19:02:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:23.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 [2024-07-15 19:02:03.402743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.462 19:02:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.721 19:02:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.721 19:02:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:23.721 19:02:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.721 19:02:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:23.721 19:02:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.251 
19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:26.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 [2024-07-15 19:02:06.212575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.251 19:02:06 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.251 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.511 19:02:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.511 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:26.511 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.511 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:26.511 19:02:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 [2024-07-15 19:02:09.016922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 [2024-07-15 19:02:09.065000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 [2024-07-15 19:02:09.113209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
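Note: the iterations above all drive the same RPC sequence through /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py. A minimal sketch of one pass of the seq 1 5 loop, reconstructed from the rpc_cmd calls visible in this trace (the NQN, serial, address and namespace arguments are taken from the log; packaging it as a standalone script is an assumption, not the test's actual source):

    #!/usr/bin/env bash
    # Sketch of the per-iteration subsystem setup/teardown seen in target/rpc.sh@99-107 above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME            # rpc.sh@100
        "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # rpc.sh@101
        "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1                            # rpc.sh@102
        "$rpc" nvmf_subsystem_allow_any_host "$nqn"                            # rpc.sh@103
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1                               # rpc.sh@105
        "$rpc" nvmf_delete_subsystem "$nqn"                                    # rpc.sh@107
    done

The earlier loop (rpc.sh@81-94) follows the same pattern but additionally issues nvme connect / nvme disconnect between setup and teardown, as the trace above shows.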
00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.095 [2024-07-15 19:02:09.161356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.095 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
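Note: the waitforserial and waitforserial_disconnect helpers traced during the connect/disconnect passes earlier poll lsblk until a namespace with the expected serial appears (or disappears). A rough equivalent of that polling, inferred from the common/autotest_common.sh lines in the log (the function bodies here are a sketch under that assumption, not the file's actual contents):

    # Poll until a block device exposing the given serial shows up (waitforserial pattern).
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }

    # Poll until no block device with the given serial remains (waitforserial_disconnect pattern).
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }

In the trace these helpers are called with SPDKISFASTANDAWESOME, the serial assigned when the subsystem is created, so a successful connect is detected as soon as the NVMe namespace is enumerated by the host.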
00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 [2024-07-15 19:02:09.209504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:29.096 "tick_rate": 2700000000, 00:07:29.096 "poll_groups": [ 00:07:29.096 { 00:07:29.096 "name": "nvmf_tgt_poll_group_000", 00:07:29.096 "admin_qpairs": 2, 00:07:29.096 "io_qpairs": 84, 00:07:29.096 "current_admin_qpairs": 0, 00:07:29.096 "current_io_qpairs": 0, 00:07:29.096 "pending_bdev_io": 0, 00:07:29.096 "completed_nvme_io": 183, 00:07:29.096 "transports": [ 00:07:29.096 { 00:07:29.096 "trtype": "TCP" 00:07:29.096 } 00:07:29.096 ] 00:07:29.096 }, 00:07:29.096 { 00:07:29.096 "name": "nvmf_tgt_poll_group_001", 00:07:29.096 "admin_qpairs": 2, 00:07:29.096 "io_qpairs": 84, 00:07:29.096 "current_admin_qpairs": 0, 00:07:29.096 "current_io_qpairs": 0, 00:07:29.096 "pending_bdev_io": 0, 00:07:29.096 "completed_nvme_io": 205, 00:07:29.096 "transports": [ 00:07:29.096 { 00:07:29.096 "trtype": "TCP" 00:07:29.096 } 00:07:29.096 ] 00:07:29.096 }, 00:07:29.096 { 00:07:29.096 
"name": "nvmf_tgt_poll_group_002", 00:07:29.096 "admin_qpairs": 1, 00:07:29.096 "io_qpairs": 84, 00:07:29.096 "current_admin_qpairs": 0, 00:07:29.096 "current_io_qpairs": 0, 00:07:29.096 "pending_bdev_io": 0, 00:07:29.096 "completed_nvme_io": 203, 00:07:29.096 "transports": [ 00:07:29.096 { 00:07:29.096 "trtype": "TCP" 00:07:29.096 } 00:07:29.096 ] 00:07:29.096 }, 00:07:29.096 { 00:07:29.096 "name": "nvmf_tgt_poll_group_003", 00:07:29.096 "admin_qpairs": 2, 00:07:29.096 "io_qpairs": 84, 00:07:29.096 "current_admin_qpairs": 0, 00:07:29.096 "current_io_qpairs": 0, 00:07:29.096 "pending_bdev_io": 0, 00:07:29.096 "completed_nvme_io": 95, 00:07:29.096 "transports": [ 00:07:29.096 { 00:07:29.096 "trtype": "TCP" 00:07:29.096 } 00:07:29.096 ] 00:07:29.096 } 00:07:29.096 ] 00:07:29.096 }' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:29.096 rmmod nvme_tcp 00:07:29.096 rmmod nvme_fabrics 00:07:29.096 rmmod nvme_keyring 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3217123 ']' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3217123 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3217123 ']' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3217123 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3217123 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3217123' 00:07:29.096 killing process with pid 3217123 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3217123 00:07:29.096 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3217123 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.354 19:02:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.890 19:02:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:31.890 00:07:31.890 real 0m25.759s 00:07:31.890 user 1m24.586s 00:07:31.890 sys 0m3.949s 00:07:31.890 19:02:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.890 19:02:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.890 ************************************ 00:07:31.890 END TEST nvmf_rpc 00:07:31.890 ************************************ 00:07:31.890 19:02:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:31.890 19:02:11 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:31.890 19:02:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.890 19:02:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.890 19:02:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.890 ************************************ 00:07:31.890 START TEST nvmf_invalid 00:07:31.890 ************************************ 00:07:31.890 19:02:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:31.890 * Looking for test storage... 
00:07:31.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.890 19:02:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.890 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:31.890 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.890 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.890 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:31.891 19:02:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:33.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:33.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.791 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:33.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:33.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:33.792 19:02:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:33.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:33.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:07:33.792 00:07:33.792 --- 10.0.0.2 ping statistics --- 00:07:33.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.792 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:33.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:07:33.792 00:07:33.792 --- 10.0.0.1 ping statistics --- 00:07:33.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.792 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3221637 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3221637 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3221637 ']' 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.792 19:02:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:33.792 [2024-07-15 19:02:14.120182] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:33.792 [2024-07-15 19:02:14.120282] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.792 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.792 [2024-07-15 19:02:14.190767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.050 [2024-07-15 19:02:14.315929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.050 [2024-07-15 19:02:14.315982] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.050 [2024-07-15 19:02:14.315996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.050 [2024-07-15 19:02:14.316008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.050 [2024-07-15 19:02:14.316018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.050 [2024-07-15 19:02:14.316078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.050 [2024-07-15 19:02:14.316105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.050 [2024-07-15 19:02:14.316165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.050 [2024-07-15 19:02:14.316167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:34.981 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28566 00:07:35.239 [2024-07-15 19:02:15.413903] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:35.239 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:35.239 { 00:07:35.239 "nqn": "nqn.2016-06.io.spdk:cnode28566", 00:07:35.239 "tgt_name": "foobar", 00:07:35.239 "method": "nvmf_create_subsystem", 00:07:35.239 "req_id": 1 00:07:35.239 } 00:07:35.239 Got JSON-RPC error response 00:07:35.239 response: 00:07:35.239 { 00:07:35.239 "code": -32603, 00:07:35.239 "message": "Unable to find target foobar" 00:07:35.239 }' 00:07:35.239 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:35.239 { 00:07:35.239 "nqn": "nqn.2016-06.io.spdk:cnode28566", 00:07:35.239 "tgt_name": "foobar", 00:07:35.239 "method": "nvmf_create_subsystem", 00:07:35.239 "req_id": 1 00:07:35.239 } 00:07:35.239 Got JSON-RPC error response 00:07:35.239 response: 00:07:35.239 { 00:07:35.239 "code": -32603, 00:07:35.239 "message": "Unable to find target foobar" 
00:07:35.239 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:35.239 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:35.239 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13737 00:07:35.496 [2024-07-15 19:02:15.670725] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13737: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:35.496 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:35.496 { 00:07:35.496 "nqn": "nqn.2016-06.io.spdk:cnode13737", 00:07:35.496 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:35.496 "method": "nvmf_create_subsystem", 00:07:35.496 "req_id": 1 00:07:35.496 } 00:07:35.496 Got JSON-RPC error response 00:07:35.496 response: 00:07:35.496 { 00:07:35.496 "code": -32602, 00:07:35.496 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:35.496 }' 00:07:35.496 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:35.496 { 00:07:35.496 "nqn": "nqn.2016-06.io.spdk:cnode13737", 00:07:35.496 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:35.496 "method": "nvmf_create_subsystem", 00:07:35.496 "req_id": 1 00:07:35.496 } 00:07:35.496 Got JSON-RPC error response 00:07:35.496 response: 00:07:35.496 { 00:07:35.496 "code": -32602, 00:07:35.496 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:35.496 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:35.496 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:35.496 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7129 00:07:35.496 [2024-07-15 19:02:15.923536] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7129: invalid model number 'SPDK_Controller' 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:35.754 { 00:07:35.754 "nqn": "nqn.2016-06.io.spdk:cnode7129", 00:07:35.754 "model_number": "SPDK_Controller\u001f", 00:07:35.754 "method": "nvmf_create_subsystem", 00:07:35.754 "req_id": 1 00:07:35.754 } 00:07:35.754 Got JSON-RPC error response 00:07:35.754 response: 00:07:35.754 { 00:07:35.754 "code": -32602, 00:07:35.754 "message": "Invalid MN SPDK_Controller\u001f" 00:07:35.754 }' 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:35.754 { 00:07:35.754 "nqn": "nqn.2016-06.io.spdk:cnode7129", 00:07:35.754 "model_number": "SPDK_Controller\u001f", 00:07:35.754 "method": "nvmf_create_subsystem", 00:07:35.754 "req_id": 1 00:07:35.754 } 00:07:35.754 Got JSON-RPC error response 00:07:35.754 response: 00:07:35.754 { 00:07:35.754 "code": -32602, 00:07:35.754 "message": "Invalid MN SPDK_Controller\u001f" 00:07:35.754 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.754 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 
19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 
19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z*cNtyU`bSEfHL_R`Km|X' 00:07:35.755 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'z*cNtyU`bSEfHL_R`Km|X' nqn.2016-06.io.spdk:cnode9334 00:07:36.014 [2024-07-15 19:02:16.236556] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9334: invalid serial number 'z*cNtyU`bSEfHL_R`Km|X' 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:36.014 { 00:07:36.014 "nqn": "nqn.2016-06.io.spdk:cnode9334", 00:07:36.014 "serial_number": "z*cNtyU`bSEfHL_R`Km|X", 00:07:36.014 "method": "nvmf_create_subsystem", 00:07:36.014 "req_id": 1 00:07:36.014 } 00:07:36.014 Got JSON-RPC error response 00:07:36.014 response: 00:07:36.014 { 
00:07:36.014 "code": -32602, 00:07:36.014 "message": "Invalid SN z*cNtyU`bSEfHL_R`Km|X" 00:07:36.014 }' 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:36.014 { 00:07:36.014 "nqn": "nqn.2016-06.io.spdk:cnode9334", 00:07:36.014 "serial_number": "z*cNtyU`bSEfHL_R`Km|X", 00:07:36.014 "method": "nvmf_create_subsystem", 00:07:36.014 "req_id": 1 00:07:36.014 } 00:07:36.014 Got JSON-RPC error response 00:07:36.014 response: 00:07:36.014 { 00:07:36.014 "code": -32602, 00:07:36.014 "message": "Invalid SN z*cNtyU`bSEfHL_R`Km|X" 00:07:36.014 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:36.014 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 
00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x74' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- 
]] 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'TL6O&Q\\(;uKt}-u.U24@Sem$}.xYOZ]Bbp`)4j5' 00:07:36.015 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'TL6O&Q\\(;uKt}-u.U24@Sem$}.xYOZ]Bbp`)4j5' nqn.2016-06.io.spdk:cnode11157 00:07:36.273 [2024-07-15 19:02:16.637921] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11157: invalid model number 'TL6O&Q\\(;uKt}-u.U24@Sem$}.xYOZ]Bbp`)4j5' 00:07:36.273 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:36.273 { 00:07:36.273 "nqn": "nqn.2016-06.io.spdk:cnode11157", 00:07:36.273 "model_number": "TL6O&Q\\\\(;uKt}-u.U24@Sem$}.xYOZ]Bbp`)4\u007fj5", 00:07:36.273 "method": "nvmf_create_subsystem", 00:07:36.273 "req_id": 1 00:07:36.273 } 00:07:36.273 Got JSON-RPC error response 00:07:36.273 response: 00:07:36.273 { 00:07:36.273 "code": -32602, 00:07:36.273 "message": "Invalid MN TL6O&Q\\\\(;uKt}-u.U24@Sem$}.xYOZ]Bbp`)4\u007fj5" 00:07:36.273 }' 00:07:36.273 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:36.273 { 00:07:36.273 "nqn": "nqn.2016-06.io.spdk:cnode11157", 00:07:36.273 "model_number": "TL6O&Q\\\\(;uKt}-u.U24@Sem$}.xYOZ]Bbp`)4\u007fj5", 00:07:36.273 "method": "nvmf_create_subsystem", 00:07:36.273 "req_id": 1 00:07:36.273 } 00:07:36.273 Got JSON-RPC error response 00:07:36.273 response: 00:07:36.273 { 00:07:36.273 "code": -32602, 00:07:36.273 "message": "Invalid MN TL6O&Q\\\\(;uKt}-u.U24@Sem$}.xYOZ]Bbp`)4\u007fj5" 00:07:36.273 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:36.273 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:36.540 [2024-07-15 19:02:16.886796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.540 19:02:16 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:36.806 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:36.806 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:36.806 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:36.806 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:36.806 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:37.114 [2024-07-15 19:02:17.396451] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:37.114 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:37.114 { 00:07:37.114 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:37.114 "listen_address": { 00:07:37.114 "trtype": "tcp", 00:07:37.114 "traddr": "", 00:07:37.114 "trsvcid": "4421" 00:07:37.114 }, 00:07:37.114 "method": "nvmf_subsystem_remove_listener", 00:07:37.114 "req_id": 1 00:07:37.114 } 00:07:37.114 Got JSON-RPC error response 00:07:37.114 response: 00:07:37.114 { 00:07:37.114 "code": -32602, 00:07:37.114 "message": "Invalid parameters" 00:07:37.114 }' 00:07:37.114 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:37.114 { 00:07:37.114 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:37.114 "listen_address": { 
00:07:37.114 "trtype": "tcp", 00:07:37.114 "traddr": "", 00:07:37.114 "trsvcid": "4421" 00:07:37.114 }, 00:07:37.114 "method": "nvmf_subsystem_remove_listener", 00:07:37.114 "req_id": 1 00:07:37.114 } 00:07:37.114 Got JSON-RPC error response 00:07:37.114 response: 00:07:37.114 { 00:07:37.114 "code": -32602, 00:07:37.114 "message": "Invalid parameters" 00:07:37.114 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:37.114 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25627 -i 0 00:07:37.372 [2024-07-15 19:02:17.649289] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25627: invalid cntlid range [0-65519] 00:07:37.372 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:37.372 { 00:07:37.372 "nqn": "nqn.2016-06.io.spdk:cnode25627", 00:07:37.372 "min_cntlid": 0, 00:07:37.372 "method": "nvmf_create_subsystem", 00:07:37.372 "req_id": 1 00:07:37.372 } 00:07:37.372 Got JSON-RPC error response 00:07:37.372 response: 00:07:37.372 { 00:07:37.372 "code": -32602, 00:07:37.372 "message": "Invalid cntlid range [0-65519]" 00:07:37.372 }' 00:07:37.372 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:37.372 { 00:07:37.372 "nqn": "nqn.2016-06.io.spdk:cnode25627", 00:07:37.372 "min_cntlid": 0, 00:07:37.372 "method": "nvmf_create_subsystem", 00:07:37.372 "req_id": 1 00:07:37.372 } 00:07:37.372 Got JSON-RPC error response 00:07:37.372 response: 00:07:37.372 { 00:07:37.372 "code": -32602, 00:07:37.372 "message": "Invalid cntlid range [0-65519]" 00:07:37.372 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:37.372 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10319 -i 65520 00:07:37.630 [2024-07-15 19:02:17.894076] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10319: invalid cntlid range [65520-65519] 00:07:37.630 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:37.630 { 00:07:37.630 "nqn": "nqn.2016-06.io.spdk:cnode10319", 00:07:37.630 "min_cntlid": 65520, 00:07:37.630 "method": "nvmf_create_subsystem", 00:07:37.630 "req_id": 1 00:07:37.630 } 00:07:37.630 Got JSON-RPC error response 00:07:37.630 response: 00:07:37.630 { 00:07:37.630 "code": -32602, 00:07:37.630 "message": "Invalid cntlid range [65520-65519]" 00:07:37.630 }' 00:07:37.630 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:37.630 { 00:07:37.630 "nqn": "nqn.2016-06.io.spdk:cnode10319", 00:07:37.630 "min_cntlid": 65520, 00:07:37.630 "method": "nvmf_create_subsystem", 00:07:37.630 "req_id": 1 00:07:37.630 } 00:07:37.630 Got JSON-RPC error response 00:07:37.630 response: 00:07:37.630 { 00:07:37.630 "code": -32602, 00:07:37.630 "message": "Invalid cntlid range [65520-65519]" 00:07:37.630 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:37.630 19:02:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8063 -I 0 00:07:37.887 [2024-07-15 19:02:18.138884] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8063: invalid cntlid range [1-0] 00:07:37.887 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:37.887 { 
00:07:37.887 "nqn": "nqn.2016-06.io.spdk:cnode8063", 00:07:37.887 "max_cntlid": 0, 00:07:37.887 "method": "nvmf_create_subsystem", 00:07:37.887 "req_id": 1 00:07:37.887 } 00:07:37.887 Got JSON-RPC error response 00:07:37.887 response: 00:07:37.887 { 00:07:37.887 "code": -32602, 00:07:37.887 "message": "Invalid cntlid range [1-0]" 00:07:37.887 }' 00:07:37.887 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:37.887 { 00:07:37.887 "nqn": "nqn.2016-06.io.spdk:cnode8063", 00:07:37.887 "max_cntlid": 0, 00:07:37.887 "method": "nvmf_create_subsystem", 00:07:37.887 "req_id": 1 00:07:37.887 } 00:07:37.887 Got JSON-RPC error response 00:07:37.887 response: 00:07:37.887 { 00:07:37.887 "code": -32602, 00:07:37.887 "message": "Invalid cntlid range [1-0]" 00:07:37.887 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:37.887 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27838 -I 65520 00:07:38.144 [2024-07-15 19:02:18.383681] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27838: invalid cntlid range [1-65520] 00:07:38.144 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:38.144 { 00:07:38.144 "nqn": "nqn.2016-06.io.spdk:cnode27838", 00:07:38.144 "max_cntlid": 65520, 00:07:38.144 "method": "nvmf_create_subsystem", 00:07:38.144 "req_id": 1 00:07:38.144 } 00:07:38.144 Got JSON-RPC error response 00:07:38.144 response: 00:07:38.144 { 00:07:38.144 "code": -32602, 00:07:38.144 "message": "Invalid cntlid range [1-65520]" 00:07:38.144 }' 00:07:38.144 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:38.144 { 00:07:38.144 "nqn": "nqn.2016-06.io.spdk:cnode27838", 00:07:38.144 "max_cntlid": 65520, 00:07:38.144 "method": "nvmf_create_subsystem", 00:07:38.144 "req_id": 1 00:07:38.144 } 00:07:38.144 Got JSON-RPC error response 00:07:38.144 response: 00:07:38.144 { 00:07:38.144 "code": -32602, 00:07:38.144 "message": "Invalid cntlid range [1-65520]" 00:07:38.144 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:38.144 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19338 -i 6 -I 5 00:07:38.415 [2024-07-15 19:02:18.632551] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19338: invalid cntlid range [6-5] 00:07:38.415 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:38.415 { 00:07:38.415 "nqn": "nqn.2016-06.io.spdk:cnode19338", 00:07:38.415 "min_cntlid": 6, 00:07:38.415 "max_cntlid": 5, 00:07:38.415 "method": "nvmf_create_subsystem", 00:07:38.415 "req_id": 1 00:07:38.415 } 00:07:38.415 Got JSON-RPC error response 00:07:38.415 response: 00:07:38.415 { 00:07:38.415 "code": -32602, 00:07:38.415 "message": "Invalid cntlid range [6-5]" 00:07:38.415 }' 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:38.416 { 00:07:38.416 "nqn": "nqn.2016-06.io.spdk:cnode19338", 00:07:38.416 "min_cntlid": 6, 00:07:38.416 "max_cntlid": 5, 00:07:38.416 "method": "nvmf_create_subsystem", 00:07:38.416 "req_id": 1 00:07:38.416 } 00:07:38.416 Got JSON-RPC error response 00:07:38.416 response: 00:07:38.416 { 00:07:38.416 "code": -32602, 00:07:38.416 "message": "Invalid cntlid range [6-5]" 00:07:38.416 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:38.416 { 00:07:38.416 "name": "foobar", 00:07:38.416 "method": "nvmf_delete_target", 00:07:38.416 "req_id": 1 00:07:38.416 } 00:07:38.416 Got JSON-RPC error response 00:07:38.416 response: 00:07:38.416 { 00:07:38.416 "code": -32602, 00:07:38.416 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:38.416 }' 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:38.416 { 00:07:38.416 "name": "foobar", 00:07:38.416 "method": "nvmf_delete_target", 00:07:38.416 "req_id": 1 00:07:38.416 } 00:07:38.416 Got JSON-RPC error response 00:07:38.416 response: 00:07:38.416 { 00:07:38.416 "code": -32602, 00:07:38.416 "message": "The specified target doesn't exist, cannot delete it." 00:07:38.416 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:38.416 rmmod nvme_tcp 00:07:38.416 rmmod nvme_fabrics 00:07:38.416 rmmod nvme_keyring 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3221637 ']' 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3221637 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3221637 ']' 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3221637 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:38.416 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3221637 00:07:38.677 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:38.677 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:38.677 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3221637' 00:07:38.677 killing process with pid 3221637 00:07:38.677 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3221637 00:07:38.677 19:02:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3221637 00:07:38.937 19:02:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:38.937 
19:02:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:38.937 19:02:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:38.937 19:02:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:38.937 19:02:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:38.937 19:02:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.937 19:02:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.937 19:02:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.843 19:02:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.843 00:07:40.843 real 0m9.355s 00:07:40.843 user 0m22.952s 00:07:40.843 sys 0m2.500s 00:07:40.843 19:02:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.843 19:02:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:40.843 ************************************ 00:07:40.843 END TEST nvmf_invalid 00:07:40.843 ************************************ 00:07:40.843 19:02:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:40.843 19:02:21 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:40.843 19:02:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.843 19:02:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.843 19:02:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.843 ************************************ 00:07:40.843 START TEST nvmf_abort 00:07:40.843 ************************************ 00:07:40.843 19:02:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:41.103 * Looking for test storage... 
00:07:41.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.103 19:02:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:43.011 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.011 19:02:23 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:43.011 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:43.011 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:43.011 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.011 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:07:43.012 00:07:43.012 --- 10.0.0.2 ping statistics --- 00:07:43.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.012 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:43.012 00:07:43.012 --- 10.0.0.1 ping statistics --- 00:07:43.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.012 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3224399 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3224399 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3224399 ']' 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.012 19:02:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:43.272 [2024-07-15 19:02:23.459339] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:07:43.272 [2024-07-15 19:02:23.459426] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.272 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.272 [2024-07-15 19:02:23.528552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.272 [2024-07-15 19:02:23.648725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.272 [2024-07-15 19:02:23.648787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:43.272 [2024-07-15 19:02:23.648803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.272 [2024-07-15 19:02:23.648817] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.272 [2024-07-15 19:02:23.648828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.272 [2024-07-15 19:02:23.648893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.272 [2024-07-15 19:02:23.648951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.272 [2024-07-15 19:02:23.648955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 [2024-07-15 19:02:24.431545] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 Malloc0 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 Delay0 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.207 19:02:24 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 [2024-07-15 19:02:24.501054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.207 19:02:24 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:44.207 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.207 [2024-07-15 19:02:24.608070] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:46.746 Initializing NVMe Controllers 00:07:46.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:46.746 controller IO queue size 128 less than required 00:07:46.746 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:46.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:46.746 Initialization complete. Launching workers. 
00:07:46.746 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31889 00:07:46.746 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31950, failed to submit 62 00:07:46.746 success 31893, unsuccess 57, failed 0 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.746 rmmod nvme_tcp 00:07:46.746 rmmod nvme_fabrics 00:07:46.746 rmmod nvme_keyring 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3224399 ']' 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3224399 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3224399 ']' 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3224399 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3224399 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3224399' 00:07:46.746 killing process with pid 3224399 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3224399 00:07:46.746 19:02:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3224399 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.746 19:02:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.280 19:02:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.281 00:07:49.281 real 0m7.972s 00:07:49.281 user 0m12.858s 00:07:49.281 sys 0m2.586s 00:07:49.281 19:02:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.281 19:02:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:49.281 ************************************ 00:07:49.281 END TEST nvmf_abort 00:07:49.281 ************************************ 00:07:49.281 19:02:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:49.281 19:02:29 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:49.281 19:02:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:49.281 19:02:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.281 19:02:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.281 ************************************ 00:07:49.281 START TEST nvmf_ns_hotplug_stress 00:07:49.281 ************************************ 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:49.281 * Looking for test storage... 00:07:49.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.281 19:02:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.281 19:02:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.281 19:02:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:51.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:51.183 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.183 19:02:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:51.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:51.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.183 19:02:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.183 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:51.184 00:07:51.184 --- 10.0.0.2 ping statistics --- 00:07:51.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.184 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:51.184 00:07:51.184 --- 10.0.0.1 ping statistics --- 00:07:51.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.184 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3226754 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3226754 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3226754 ']' 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.184 19:02:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:51.184 [2024-07-15 19:02:31.467194] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:07:51.184 [2024-07-15 19:02:31.467277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.184 [2024-07-15 19:02:31.536839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.442 [2024-07-15 19:02:31.657323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.442 [2024-07-15 19:02:31.657385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.442 [2024-07-15 19:02:31.657401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.442 [2024-07-15 19:02:31.657414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.442 [2024-07-15 19:02:31.657426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.442 [2024-07-15 19:02:31.657513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.442 [2024-07-15 19:02:31.657567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.442 [2024-07-15 19:02:31.657570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.021 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.021 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:52.021 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.021 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.021 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:52.279 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.279 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:52.279 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:52.537 [2024-07-15 19:02:32.744989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.537 19:02:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:52.795 19:02:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.052 [2024-07-15 19:02:33.307884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.052 19:02:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.309 19:02:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:07:53.569 Malloc0 00:07:53.569 19:02:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:53.850 Delay0 00:07:53.850 19:02:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.109 19:02:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:54.367 NULL1 00:07:54.367 19:02:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:54.624 19:02:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3227183 00:07:54.624 19:02:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:54.624 19:02:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:07:54.624 19:02:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.624 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.000 Read completed with error (sct=0, sc=11) 00:07:56.000 19:02:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.000 19:02:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:56.000 19:02:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:56.258 true 00:07:56.258 19:02:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:07:56.258 19:02:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.192 19:02:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.457 19:02:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:57.457 19:02:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:57.714 true 00:07:57.714 19:02:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:07:57.714 19:02:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.971 19:02:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.229 19:02:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:58.229 19:02:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:58.486 true 00:07:58.486 19:02:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:07:58.486 19:02:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.744 19:02:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.001 19:02:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:59.001 19:02:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:59.259 true 00:07:59.259 19:02:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:07:59.259 19:02:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.217 19:02:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.474 19:02:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:00.474 19:02:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:00.730 true 00:08:00.730 19:02:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:00.730 19:02:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.986 19:02:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.243 19:02:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:01.243 19:02:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:01.499 true 00:08:01.499 19:02:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:01.499 19:02:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.427 19:02:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.683 19:02:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:02.683 19:02:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:02.951 true 00:08:02.951 19:02:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:02.951 19:02:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.211 19:02:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.467 19:02:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:03.467 19:02:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:03.467 true 00:08:03.724 19:02:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:03.724 19:02:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.724 19:02:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.981 19:02:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:03.981 19:02:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:04.237 true 00:08:04.237 19:02:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:04.237 19:02:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.607 19:02:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.607 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:08:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.607 19:02:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:05.607 19:02:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:05.864 true 00:08:05.864 19:02:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:05.864 19:02:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.796 19:02:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.796 19:02:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:06.796 19:02:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:07.052 true 00:08:07.052 19:02:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:07.052 19:02:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.643 19:02:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.643 19:02:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:07.643 19:02:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:07.900 true 00:08:07.900 19:02:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:07.900 19:02:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.831 19:02:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.088 19:02:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:09.088 19:02:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:09.345 true 00:08:09.345 19:02:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:09.345 19:02:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.601 19:02:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.858 19:02:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:09.858 19:02:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:10.115 true 00:08:10.115 19:02:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:10.115 19:02:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.371 19:02:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.627 19:02:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:10.627 19:02:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:10.884 true 00:08:10.884 19:02:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:10.884 19:02:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.832 19:02:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.087 19:02:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:12.087 19:02:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:12.343 true 00:08:12.343 19:02:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:12.343 19:02:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.601 19:02:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.857 19:02:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:12.857 19:02:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:13.113 true 00:08:13.113 19:02:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:13.113 19:02:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.369 19:02:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.626 19:02:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:13.626 19:02:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:13.884 true 00:08:13.884 19:02:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:13.884 19:02:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.822 19:02:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.337 19:02:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:15.337 19:02:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:15.594 true 00:08:15.594 19:02:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:15.594 19:02:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.177 19:02:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.436 19:02:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:16.436 19:02:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:16.692 true 00:08:16.692 19:02:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:16.692 19:02:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.948 19:02:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.205 19:02:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:17.205 19:02:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:17.462 true 00:08:17.462 19:02:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:17.462 19:02:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.434 19:02:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.691 19:02:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:18.691 19:02:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:18.947 true 00:08:18.947 19:02:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:18.947 19:02:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.204 19:02:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.460 19:02:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:19.460 19:02:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:19.717 true 00:08:19.717 19:03:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:19.717 19:03:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.702 19:03:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.959 19:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:20.959 19:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:20.959 true 00:08:21.247 19:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:21.247 19:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.508 19:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.508 19:03:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:21.508 19:03:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:21.809 true 00:08:21.809 19:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:21.809 19:03:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.739 19:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.995 19:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:22.995 19:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:23.251 true 00:08:23.251 19:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:23.251 19:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.509 19:03:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.765 19:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:23.765 19:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:24.022 true 00:08:24.022 19:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:24.022 19:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.279 19:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.537 19:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:24.537 19:03:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:24.797 true 00:08:24.797 19:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183 00:08:24.797 19:03:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.729 Initializing NVMe Controllers 00:08:25.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:25.729 Controller IO queue size 128, less than required. 
00:08:25.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:25.729 Controller IO queue size 128, less than required.
00:08:25.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:25.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:25.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:25.729 Initialization complete. Launching workers.
00:08:25.729 ========================================================
00:08:25.729 Latency(us)
00:08:25.729 Device Information : IOPS MiB/s Average min max
00:08:25.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1093.90 0.53 61760.05 2187.79 1013068.79
00:08:25.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10336.77 5.05 12382.88 2604.47 377482.30
00:08:25.729 ========================================================
00:08:25.729 Total : 11430.67 5.58 17108.21 2187.79 1013068.79
00:08:25.729
00:08:25.729 19:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:25.988 19:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:25.988 19:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:26.246 true
00:08:26.246 19:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3227183
00:08:26.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3227183) - No such process
00:08:26.246 19:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3227183
00:08:26.246 19:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:26.503 19:03:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:26.761 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:26.761 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:26.761 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:26.761 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:26.761 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:27.018 null0
00:08:27.018 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:27.018 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:27.018 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:27.275 null1
00:08:27.275 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.275 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.275 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:27.532 null2 00:08:27.532 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.532 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.532 19:03:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:27.788 null3 00:08:27.788 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:27.788 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:27.788 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:28.045 null4 00:08:28.045 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.045 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.045 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:28.302 null5 00:08:28.302 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.302 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.302 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:28.560 null6 00:08:28.560 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.560 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.560 19:03:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:28.817 null7 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
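The @58-@64 entries above mark the hand-off from the single-namespace loop to the concurrent phase: eight null bdevs (null0 through null7) are created with "bdev_null_create nullN 100 4096", and one background add_remove worker is started per bdev, with each worker's PID collected for a later wait. A minimal bash sketch of what those trace lines suggest, reconstructed from this log rather than quoted from ns_hotplug_stress.sh, and relying on the add_remove helper sketched a little further down:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # same rpc.py path the trace uses
    nqn=nqn.2016-06.io.spdk:cnode1

    nthreads=8
    pids=()

    # @59-@60: one null bdev per worker, size 100 and block size 4096, exactly as the trace shows.
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done

    # @62-@64: start one add_remove worker per bdev in the background and remember its PID;
    # the script then waits for all of them (the "@66 -- # wait 3231984 3231985 ..." entry below).
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"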
00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
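The interleaved @14-@18 entries here and below come from inside those workers: each one pins a single namespace ID to a single null bdev ("add_remove 1 null0", "add_remove 2 null1", ...) and repeatedly attaches and detaches that namespace against cnode1. A rough reconstruction of the helper from these trace lines, with the ten-iteration bound taken from the "(( i < 10 ))" checks; the rest is inferred, not quoted from the script:

    add_remove() {
        # One fixed namespace ID and backing bdev per worker, e.g. "add_remove 1 null0".
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do   # @16: ten add/remove passes per worker
            # @17: attach the bdev as namespace $nsid of cnode1 ...
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            # @18: ... then detach it again, racing the other seven workers.
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

(rpc and nqn as defined in the previous sketch.)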
00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
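For contrast with this concurrent phase, the first half of the trace (the repeated @44-@50 entries, null_size=1002 up to 1029) stressed a single namespace while an I/O workload was running: detach namespace 1, re-attach the Delay0 bdev, then grow the NULL1 bdev, repeating until the workload process exits (the "kill: (3227183) - No such process" entry above). A rough sketch of that loop as the trace implies it, again using the rpc and nqn shorthand from the sketches above; the perf_pid variable name and the starting null_size are assumptions, and 3227183 is simply the PID this run happened to use:

    perf_pid=3227183        # PID of the I/O workload started earlier in the test (variable name assumed)
    null_size=1000          # starting value assumed; this section of the trace begins at 1002
    while kill -0 "$perf_pid"; do                     # @44: keep going while the workload is alive
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # @45: detach namespace 1 under load
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # @46: re-attach the delay-injecting Delay0 bdev
        ((++null_size))                               # @49: 1001, 1002, ... matching the null_size=10xx entries
        "$rpc" bdev_null_resize NULL1 "$null_size"    # @50: grow NULL1 by one unit per pass
    done
    wait "$perf_pid"                                  # @53: reap the workload once kill -0 fails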
00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3231984 3231985 3231987 3231989 3231991 3231993 3231995 3231997 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:28.817 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:29.382 19:03:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:29.639 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:29.639 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.640 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:29.897 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:29.897 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:29.897 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:29.897 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:29.898 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.156 
19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.156 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.157 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.417 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:30.674 19:03:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:30.932 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.190 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.448 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.706 19:03:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:31.706 19:03:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.706 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:31.707 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:31.964 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.220 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:32.477 19:03:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:32.734 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:32.991 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:33.248 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:33.508 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:33.508 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.765 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:33.765 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:33.765 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:33.765 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.765 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:33.765 19:03:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.029 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.295 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.552 rmmod nvme_tcp 00:08:34.552 rmmod nvme_fabrics 00:08:34.552 rmmod nvme_keyring 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3226754 ']' 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3226754 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3226754 ']' 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3226754 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.552 19:03:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3226754 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3226754' 00:08:34.552 killing process with pid 3226754 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3226754 00:08:34.552 19:03:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3226754 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.811 19:03:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.344 19:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:37.344 00:08:37.344 real 0m47.958s 00:08:37.344 user 3m37.711s 00:08:37.344 sys 0m16.632s 00:08:37.344 19:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.344 19:03:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.344 ************************************ 00:08:37.344 END TEST nvmf_ns_hotplug_stress 00:08:37.344 ************************************ 00:08:37.344 19:03:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:37.344 19:03:17 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:37.344 19:03:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:37.344 19:03:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.344 19:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.344 ************************************ 00:08:37.344 START TEST nvmf_connect_stress 00:08:37.344 ************************************ 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:37.344 * Looking for test storage... 
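Before the connect_stress output continues below, a note on the nvmf_ns_hotplug_stress trace that just ended above. Stripped of timestamps, it is the same three script lines firing over and over: ns_hotplug_stress.sh line 16 advances a loop counter ((( ++i )) / (( i < 10 ))), line 17 adds one of eight null-bdev namespaces to nqn.2016-06.io.spdk:cnode1, and line 18 removes one again. A minimal reconstruction of that pattern, inferred from the xtrace output rather than copied from the script (the concurrent dispatch in particular is an assumption based on the shuffled nsid ordering within each burst):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace

add_remove() {
    local nsid=$1 bdev=$2
    # Traced as ns_hotplug_stress.sh lines 16-18: loop header, add_ns, remove_ns.
    for ((i = 0; i < 10; ++i)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

# nsid 1 backed by null0 ... nsid 8 backed by null7, matching the trace;
# running the eight workers concurrently is an inference, not shown explicitly.
for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &
done
wait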
00:08:37.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.344 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.345 19:03:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:37.345 19:03:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.345 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:37.345 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:37.345 19:03:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.345 19:03:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.242 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.242 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.242 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.242 19:03:19 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.242 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.243 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:39.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:39.243 00:08:39.243 --- 10.0.0.2 ping statistics --- 00:08:39.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.243 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:08:39.243 00:08:39.243 --- 10.0.0.1 ping statistics --- 00:08:39.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.243 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3234749 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3234749 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3234749 ']' 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.243 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.243 [2024-07-15 19:03:19.543610] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
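Up to this point the connect_stress run has only been preparing its environment: test/nvmf/common.sh detects the two e810 ports (cvl_0_0 / cvl_0_1), moves the target port into a private network namespace, assigns 10.0.0.2 (target) and 10.0.0.1 (initiator), opens TCP port 4420, verifies reachability with ping in both directions, loads nvme-tcp, and launches nvmf_tgt inside the namespace (its startup banner appears just above and continues below). Condensed from the commands visible in the trace; the commands are taken from the log, only the grouping into one sequence is ours:

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"                          # target port lives in its own namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"                   # initiator side
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator
modprobe nvme-tcp
ip netns exec "$ns" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &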
00:08:39.243 [2024-07-15 19:03:19.543689] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.243 [2024-07-15 19:03:19.606802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.501 [2024-07-15 19:03:19.717167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.501 [2024-07-15 19:03:19.717234] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.501 [2024-07-15 19:03:19.717248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.501 [2024-07-15 19:03:19.717261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.501 [2024-07-15 19:03:19.717271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.501 [2024-07-15 19:03:19.717360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.501 [2024-07-15 19:03:19.717410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.501 [2024-07-15 19:03:19.717413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.501 [2024-07-15 19:03:19.851223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.501 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.501 [2024-07-15 19:03:19.877022] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.502 NULL1 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3234781 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
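With the target up, the trace above shows the connect_stress fixture being configured over RPC and the stress binary being launched; the for i in $(seq 1 20) / cat lines that begin here and continue below assemble the rpc.txt work file, whose contents are not visible in the xtrace output and so are not reproduced. The RPC sequence itself, restated from the log (rpc_cmd is the autotest RPC helper; the same calls can also be issued directly with scripts/rpc.py):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # transport flags as seen in the trace
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                               # allow any host, at most 10 namespaces
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks

# Stress tool: core mask 0x1, 10 seconds of connect/disconnect against the listener above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!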
00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3234781 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.502 19:03:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.065 19:03:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.065 19:03:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3234781 00:08:40.065 19:03:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.065 19:03:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.065 19:03:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.323 19:03:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.323 19:03:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3234781 00:08:40.323 19:03:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.323 19:03:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.323 19:03:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.580 19:03:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.580 19:03:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3234781 
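From here until the stress tool exits, the trace is a steady repetition of connect_stress.sh lines 34-35: check that the perf process (pid 3234781) is still alive, then issue another batch of RPCs against the subsystem. A sketch of that shape, reconstructed from the xtrace output; feeding rpc_cmd the rpc.txt file prepared above is an assumption (redirections do not appear in xtrace), not something the log shows directly:

rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt

# Line 34 in the trace: kill -0 only tests that the pid still exists, it sends no signal.
while kill -0 "$PERF_PID"; do
    rpc_cmd < "$rpcs"     # line 35: replay the prepared RPC batch while the stressor runs (assumed input)
done

Judging from the timestamps below, each pass takes roughly 0.25-0.5 s, so the 10-second run produces a few dozen iterations before the final teardown.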
[liveness-poll loop: the same five-record check (`[[ 0 == 0 ]]`, `kill -0 3234781`, `rpc_cmd`, `xtrace_disable`, `set +x`) repeats a few times per second from 00:08:40.065 (19:03:20) through the final check at 00:08:49.656 (19:03:29) while connect_stress runs; the roughly thirty identical intermediate iterations are elided here, and the closing `set +x` of the final check follows]
00:08:49.656 19:03:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.913 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3234781 00:08:49.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3234781) - No such process 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3234781 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.913 rmmod nvme_tcp 00:08:49.913 rmmod nvme_fabrics 00:08:49.913 rmmod nvme_keyring 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3234749 ']' 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3234749 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3234749 ']' 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3234749 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3234749 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3234749' 00:08:49.913 killing process with pid 3234749 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3234749 00:08:49.913 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3234749 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.169 19:03:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.699 19:03:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.699 00:08:52.699 real 0m15.341s 00:08:52.699 user 0m38.511s 00:08:52.699 sys 0m5.860s 00:08:52.699 19:03:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.699 19:03:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.699 ************************************ 00:08:52.699 END TEST nvmf_connect_stress 00:08:52.699 ************************************ 00:08:52.699 19:03:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:52.699 19:03:32 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:52.699 19:03:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.699 19:03:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.699 19:03:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.699 ************************************ 00:08:52.699 START TEST nvmf_fused_ordering 00:08:52.699 ************************************ 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:52.699 * Looking for test storage... 
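Note: the START TEST / END TEST banners and the real/user/sys timing block above come from the autotest harness's run_test wrapper, which times each test script and propagates its exit status (the `return 0` records). Outside Jenkins this stage corresponds roughly to invoking the script directly (path and flag copied from the trace; root privileges and a host already prepared by the harness are assumed):

  sudo ./test/nvmf/target/fused_ordering.sh --transport=tcp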
00:08:52.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.699 19:03:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:54.601 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:54.601 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:54.601 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:54.601 19:03:34 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:54.601 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:54.601 00:08:54.601 --- 10.0.0.2 ping statistics --- 00:08:54.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.601 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:08:54.601 00:08:54.601 --- 10.0.0.1 ping statistics --- 00:08:54.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.601 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3237920 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.601 19:03:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3237920 00:08:54.602 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3237920 ']' 00:08:54.602 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.602 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.602 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.602 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.602 19:03:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.602 [2024-07-15 19:03:34.765029] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
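Note: the nvmftestinit sequence above wires the two ice/e810 ports into a back-to-back TCP test bed: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24, the initiator side keeps cvl_0_1 with 10.0.0.1/24, the NVMe/TCP port is opened in the firewall, connectivity is checked in both directions, and nvmf_tgt is then started inside the namespace. Collected as a sketch, the commands are the same ones traced above (interface names, addresses and the core mask are specific to this run; run as root from the spdk checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &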
00:08:54.602 [2024-07-15 19:03:34.765112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.602 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.602 [2024-07-15 19:03:34.829493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.602 [2024-07-15 19:03:34.946201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.602 [2024-07-15 19:03:34.946263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.602 [2024-07-15 19:03:34.946292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.602 [2024-07-15 19:03:34.946303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.602 [2024-07-15 19:03:34.946313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.602 [2024-07-15 19:03:34.946338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.860 [2024-07-15 19:03:35.087332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.860 [2024-07-15 19:03:35.103496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.860 19:03:35 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.860 NULL1 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.860 19:03:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:54.860 [2024-07-15 19:03:35.151059] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:08:54.860 [2024-07-15 19:03:35.151102] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238066 ] 00:08:54.860 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.427 Attached to nqn.2016-06.io.spdk:cnode1 00:08:55.427 Namespace ID: 1 size: 1GB 00:08:55.427 fused_ordering(0) 00:08:55.427 fused_ordering(1) 00:08:55.427 fused_ordering(2) 00:08:55.427 fused_ordering(3) 00:08:55.427 fused_ordering(4) 00:08:55.427 fused_ordering(5) 00:08:55.427 fused_ordering(6) 00:08:55.427 fused_ordering(7) 00:08:55.427 fused_ordering(8) 00:08:55.427 fused_ordering(9) 00:08:55.427 fused_ordering(10) 00:08:55.427 fused_ordering(11) 00:08:55.427 fused_ordering(12) 00:08:55.427 fused_ordering(13) 00:08:55.427 fused_ordering(14) 00:08:55.427 fused_ordering(15) 00:08:55.427 fused_ordering(16) 00:08:55.427 fused_ordering(17) 00:08:55.427 fused_ordering(18) 00:08:55.427 fused_ordering(19) 00:08:55.427 fused_ordering(20) 00:08:55.427 fused_ordering(21) 00:08:55.427 fused_ordering(22) 00:08:55.427 fused_ordering(23) 00:08:55.427 fused_ordering(24) 00:08:55.427 fused_ordering(25) 00:08:55.427 fused_ordering(26) 00:08:55.427 fused_ordering(27) 00:08:55.427 fused_ordering(28) 00:08:55.427 fused_ordering(29) 00:08:55.427 fused_ordering(30) 00:08:55.427 fused_ordering(31) 00:08:55.427 fused_ordering(32) 00:08:55.427 fused_ordering(33) 00:08:55.427 fused_ordering(34) 00:08:55.427 fused_ordering(35) 00:08:55.427 fused_ordering(36) 00:08:55.427 fused_ordering(37) 00:08:55.427 fused_ordering(38) 00:08:55.427 fused_ordering(39) 00:08:55.427 fused_ordering(40) 00:08:55.427 fused_ordering(41) 00:08:55.427 fused_ordering(42) 00:08:55.427 fused_ordering(43) 00:08:55.427 
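Note: the fused_ordering(N) lines around this point appear to be per-operation progress counters from the test app (test/nvme/fused_ordering) as it drives the 1 GB null-bdev namespace it reports attaching to above. The subsystem it exercises was assembled by the rpc_cmd calls in this trace; rpc_cmd is the harness wrapper around scripts/rpc.py, so the same arguments should work when replayed manually against the default /var/tmp/spdk.sock socket (a sketch, not the harness's exact invocation):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MiB, 512-byte blocks: the "size: 1GB" namespace above
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1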
[fused_ordering(44) through fused_ordering(475) continue here; the records differ only in the counter value and timestamp (00:08:55.427 through 00:08:56.559) and are elided]
00:08:56.559 fused_ordering(476) 00:08:56.559 fused_ordering(477) 00:08:56.559 fused_ordering(478) 00:08:56.559 fused_ordering(479) 00:08:56.559 fused_ordering(480) 00:08:56.559 fused_ordering(481) 00:08:56.559 fused_ordering(482) 00:08:56.559 fused_ordering(483) 00:08:56.559 fused_ordering(484) 00:08:56.559 fused_ordering(485) 00:08:56.559 fused_ordering(486) 00:08:56.559 fused_ordering(487) 00:08:56.559 fused_ordering(488) 00:08:56.559 fused_ordering(489) 00:08:56.559 fused_ordering(490) 00:08:56.559 fused_ordering(491) 00:08:56.559 fused_ordering(492) 00:08:56.559 fused_ordering(493) 00:08:56.559 fused_ordering(494) 00:08:56.559 fused_ordering(495) 00:08:56.559 fused_ordering(496) 00:08:56.559 fused_ordering(497) 00:08:56.559 fused_ordering(498) 00:08:56.559 fused_ordering(499) 00:08:56.559 fused_ordering(500) 00:08:56.559 fused_ordering(501) 00:08:56.559 fused_ordering(502) 00:08:56.559 fused_ordering(503) 00:08:56.559 fused_ordering(504) 00:08:56.559 fused_ordering(505) 00:08:56.559 fused_ordering(506) 00:08:56.559 fused_ordering(507) 00:08:56.559 fused_ordering(508) 00:08:56.559 fused_ordering(509) 00:08:56.559 fused_ordering(510) 00:08:56.559 fused_ordering(511) 00:08:56.559 fused_ordering(512) 00:08:56.559 fused_ordering(513) 00:08:56.559 fused_ordering(514) 00:08:56.559 fused_ordering(515) 00:08:56.559 fused_ordering(516) 00:08:56.559 fused_ordering(517) 00:08:56.559 fused_ordering(518) 00:08:56.559 fused_ordering(519) 00:08:56.559 fused_ordering(520) 00:08:56.559 fused_ordering(521) 00:08:56.559 fused_ordering(522) 00:08:56.559 fused_ordering(523) 00:08:56.559 fused_ordering(524) 00:08:56.559 fused_ordering(525) 00:08:56.559 fused_ordering(526) 00:08:56.559 fused_ordering(527) 00:08:56.559 fused_ordering(528) 00:08:56.559 fused_ordering(529) 00:08:56.559 fused_ordering(530) 00:08:56.559 fused_ordering(531) 00:08:56.559 fused_ordering(532) 00:08:56.559 fused_ordering(533) 00:08:56.559 fused_ordering(534) 00:08:56.559 fused_ordering(535) 00:08:56.559 fused_ordering(536) 00:08:56.559 fused_ordering(537) 00:08:56.559 fused_ordering(538) 00:08:56.559 fused_ordering(539) 00:08:56.559 fused_ordering(540) 00:08:56.559 fused_ordering(541) 00:08:56.559 fused_ordering(542) 00:08:56.559 fused_ordering(543) 00:08:56.559 fused_ordering(544) 00:08:56.559 fused_ordering(545) 00:08:56.559 fused_ordering(546) 00:08:56.559 fused_ordering(547) 00:08:56.559 fused_ordering(548) 00:08:56.559 fused_ordering(549) 00:08:56.559 fused_ordering(550) 00:08:56.559 fused_ordering(551) 00:08:56.559 fused_ordering(552) 00:08:56.559 fused_ordering(553) 00:08:56.559 fused_ordering(554) 00:08:56.559 fused_ordering(555) 00:08:56.559 fused_ordering(556) 00:08:56.559 fused_ordering(557) 00:08:56.559 fused_ordering(558) 00:08:56.559 fused_ordering(559) 00:08:56.559 fused_ordering(560) 00:08:56.559 fused_ordering(561) 00:08:56.559 fused_ordering(562) 00:08:56.559 fused_ordering(563) 00:08:56.559 fused_ordering(564) 00:08:56.559 fused_ordering(565) 00:08:56.559 fused_ordering(566) 00:08:56.559 fused_ordering(567) 00:08:56.559 fused_ordering(568) 00:08:56.559 fused_ordering(569) 00:08:56.559 fused_ordering(570) 00:08:56.559 fused_ordering(571) 00:08:56.559 fused_ordering(572) 00:08:56.559 fused_ordering(573) 00:08:56.559 fused_ordering(574) 00:08:56.560 fused_ordering(575) 00:08:56.560 fused_ordering(576) 00:08:56.560 fused_ordering(577) 00:08:56.560 fused_ordering(578) 00:08:56.560 fused_ordering(579) 00:08:56.560 fused_ordering(580) 00:08:56.560 fused_ordering(581) 00:08:56.560 fused_ordering(582) 00:08:56.560 
fused_ordering(583) 00:08:56.560 fused_ordering(584) 00:08:56.560 fused_ordering(585) 00:08:56.560 fused_ordering(586) 00:08:56.560 fused_ordering(587) 00:08:56.560 fused_ordering(588) 00:08:56.560 fused_ordering(589) 00:08:56.560 fused_ordering(590) 00:08:56.560 fused_ordering(591) 00:08:56.560 fused_ordering(592) 00:08:56.560 fused_ordering(593) 00:08:56.560 fused_ordering(594) 00:08:56.560 fused_ordering(595) 00:08:56.560 fused_ordering(596) 00:08:56.560 fused_ordering(597) 00:08:56.560 fused_ordering(598) 00:08:56.560 fused_ordering(599) 00:08:56.560 fused_ordering(600) 00:08:56.560 fused_ordering(601) 00:08:56.560 fused_ordering(602) 00:08:56.560 fused_ordering(603) 00:08:56.560 fused_ordering(604) 00:08:56.560 fused_ordering(605) 00:08:56.560 fused_ordering(606) 00:08:56.560 fused_ordering(607) 00:08:56.560 fused_ordering(608) 00:08:56.560 fused_ordering(609) 00:08:56.560 fused_ordering(610) 00:08:56.560 fused_ordering(611) 00:08:56.560 fused_ordering(612) 00:08:56.560 fused_ordering(613) 00:08:56.560 fused_ordering(614) 00:08:56.560 fused_ordering(615) 00:08:57.126 fused_ordering(616) 00:08:57.126 fused_ordering(617) 00:08:57.126 fused_ordering(618) 00:08:57.126 fused_ordering(619) 00:08:57.126 fused_ordering(620) 00:08:57.126 fused_ordering(621) 00:08:57.126 fused_ordering(622) 00:08:57.126 fused_ordering(623) 00:08:57.126 fused_ordering(624) 00:08:57.126 fused_ordering(625) 00:08:57.126 fused_ordering(626) 00:08:57.126 fused_ordering(627) 00:08:57.126 fused_ordering(628) 00:08:57.126 fused_ordering(629) 00:08:57.126 fused_ordering(630) 00:08:57.126 fused_ordering(631) 00:08:57.126 fused_ordering(632) 00:08:57.126 fused_ordering(633) 00:08:57.126 fused_ordering(634) 00:08:57.126 fused_ordering(635) 00:08:57.126 fused_ordering(636) 00:08:57.126 fused_ordering(637) 00:08:57.126 fused_ordering(638) 00:08:57.126 fused_ordering(639) 00:08:57.126 fused_ordering(640) 00:08:57.126 fused_ordering(641) 00:08:57.126 fused_ordering(642) 00:08:57.126 fused_ordering(643) 00:08:57.126 fused_ordering(644) 00:08:57.126 fused_ordering(645) 00:08:57.126 fused_ordering(646) 00:08:57.126 fused_ordering(647) 00:08:57.126 fused_ordering(648) 00:08:57.126 fused_ordering(649) 00:08:57.126 fused_ordering(650) 00:08:57.126 fused_ordering(651) 00:08:57.126 fused_ordering(652) 00:08:57.126 fused_ordering(653) 00:08:57.126 fused_ordering(654) 00:08:57.126 fused_ordering(655) 00:08:57.126 fused_ordering(656) 00:08:57.126 fused_ordering(657) 00:08:57.126 fused_ordering(658) 00:08:57.126 fused_ordering(659) 00:08:57.126 fused_ordering(660) 00:08:57.126 fused_ordering(661) 00:08:57.126 fused_ordering(662) 00:08:57.126 fused_ordering(663) 00:08:57.126 fused_ordering(664) 00:08:57.126 fused_ordering(665) 00:08:57.126 fused_ordering(666) 00:08:57.126 fused_ordering(667) 00:08:57.126 fused_ordering(668) 00:08:57.126 fused_ordering(669) 00:08:57.126 fused_ordering(670) 00:08:57.126 fused_ordering(671) 00:08:57.126 fused_ordering(672) 00:08:57.126 fused_ordering(673) 00:08:57.126 fused_ordering(674) 00:08:57.126 fused_ordering(675) 00:08:57.126 fused_ordering(676) 00:08:57.126 fused_ordering(677) 00:08:57.126 fused_ordering(678) 00:08:57.126 fused_ordering(679) 00:08:57.126 fused_ordering(680) 00:08:57.126 fused_ordering(681) 00:08:57.126 fused_ordering(682) 00:08:57.126 fused_ordering(683) 00:08:57.126 fused_ordering(684) 00:08:57.126 fused_ordering(685) 00:08:57.126 fused_ordering(686) 00:08:57.126 fused_ordering(687) 00:08:57.126 fused_ordering(688) 00:08:57.126 fused_ordering(689) 00:08:57.126 fused_ordering(690) 
00:08:57.126 fused_ordering(691) 00:08:57.126 fused_ordering(692) 00:08:57.126 fused_ordering(693) 00:08:57.126 fused_ordering(694) 00:08:57.126 fused_ordering(695) 00:08:57.126 fused_ordering(696) 00:08:57.126 fused_ordering(697) 00:08:57.126 fused_ordering(698) 00:08:57.126 fused_ordering(699) 00:08:57.126 fused_ordering(700) 00:08:57.126 fused_ordering(701) 00:08:57.126 fused_ordering(702) 00:08:57.126 fused_ordering(703) 00:08:57.126 fused_ordering(704) 00:08:57.126 fused_ordering(705) 00:08:57.126 fused_ordering(706) 00:08:57.126 fused_ordering(707) 00:08:57.126 fused_ordering(708) 00:08:57.126 fused_ordering(709) 00:08:57.126 fused_ordering(710) 00:08:57.126 fused_ordering(711) 00:08:57.126 fused_ordering(712) 00:08:57.126 fused_ordering(713) 00:08:57.126 fused_ordering(714) 00:08:57.126 fused_ordering(715) 00:08:57.126 fused_ordering(716) 00:08:57.126 fused_ordering(717) 00:08:57.126 fused_ordering(718) 00:08:57.126 fused_ordering(719) 00:08:57.126 fused_ordering(720) 00:08:57.126 fused_ordering(721) 00:08:57.126 fused_ordering(722) 00:08:57.126 fused_ordering(723) 00:08:57.126 fused_ordering(724) 00:08:57.126 fused_ordering(725) 00:08:57.126 fused_ordering(726) 00:08:57.126 fused_ordering(727) 00:08:57.126 fused_ordering(728) 00:08:57.126 fused_ordering(729) 00:08:57.126 fused_ordering(730) 00:08:57.126 fused_ordering(731) 00:08:57.126 fused_ordering(732) 00:08:57.126 fused_ordering(733) 00:08:57.126 fused_ordering(734) 00:08:57.126 fused_ordering(735) 00:08:57.126 fused_ordering(736) 00:08:57.126 fused_ordering(737) 00:08:57.126 fused_ordering(738) 00:08:57.126 fused_ordering(739) 00:08:57.126 fused_ordering(740) 00:08:57.126 fused_ordering(741) 00:08:57.126 fused_ordering(742) 00:08:57.126 fused_ordering(743) 00:08:57.126 fused_ordering(744) 00:08:57.126 fused_ordering(745) 00:08:57.126 fused_ordering(746) 00:08:57.126 fused_ordering(747) 00:08:57.126 fused_ordering(748) 00:08:57.126 fused_ordering(749) 00:08:57.126 fused_ordering(750) 00:08:57.126 fused_ordering(751) 00:08:57.126 fused_ordering(752) 00:08:57.126 fused_ordering(753) 00:08:57.126 fused_ordering(754) 00:08:57.126 fused_ordering(755) 00:08:57.126 fused_ordering(756) 00:08:57.126 fused_ordering(757) 00:08:57.126 fused_ordering(758) 00:08:57.126 fused_ordering(759) 00:08:57.126 fused_ordering(760) 00:08:57.126 fused_ordering(761) 00:08:57.126 fused_ordering(762) 00:08:57.126 fused_ordering(763) 00:08:57.126 fused_ordering(764) 00:08:57.126 fused_ordering(765) 00:08:57.126 fused_ordering(766) 00:08:57.126 fused_ordering(767) 00:08:57.126 fused_ordering(768) 00:08:57.126 fused_ordering(769) 00:08:57.126 fused_ordering(770) 00:08:57.126 fused_ordering(771) 00:08:57.126 fused_ordering(772) 00:08:57.126 fused_ordering(773) 00:08:57.126 fused_ordering(774) 00:08:57.126 fused_ordering(775) 00:08:57.126 fused_ordering(776) 00:08:57.126 fused_ordering(777) 00:08:57.126 fused_ordering(778) 00:08:57.126 fused_ordering(779) 00:08:57.126 fused_ordering(780) 00:08:57.126 fused_ordering(781) 00:08:57.126 fused_ordering(782) 00:08:57.126 fused_ordering(783) 00:08:57.126 fused_ordering(784) 00:08:57.126 fused_ordering(785) 00:08:57.126 fused_ordering(786) 00:08:57.126 fused_ordering(787) 00:08:57.126 fused_ordering(788) 00:08:57.126 fused_ordering(789) 00:08:57.126 fused_ordering(790) 00:08:57.126 fused_ordering(791) 00:08:57.126 fused_ordering(792) 00:08:57.126 fused_ordering(793) 00:08:57.126 fused_ordering(794) 00:08:57.126 fused_ordering(795) 00:08:57.126 fused_ordering(796) 00:08:57.126 fused_ordering(797) 00:08:57.126 
fused_ordering(798) 00:08:57.126 fused_ordering(799) 00:08:57.126 fused_ordering(800) 00:08:57.126 fused_ordering(801) 00:08:57.126 fused_ordering(802) 00:08:57.126 fused_ordering(803) 00:08:57.126 fused_ordering(804) 00:08:57.126 fused_ordering(805) 00:08:57.126 fused_ordering(806) 00:08:57.126 fused_ordering(807) 00:08:57.126 fused_ordering(808) 00:08:57.126 fused_ordering(809) 00:08:57.126 fused_ordering(810) 00:08:57.126 fused_ordering(811) 00:08:57.126 fused_ordering(812) 00:08:57.126 fused_ordering(813) 00:08:57.126 fused_ordering(814) 00:08:57.126 fused_ordering(815) 00:08:57.126 fused_ordering(816) 00:08:57.126 fused_ordering(817) 00:08:57.126 fused_ordering(818) 00:08:57.126 fused_ordering(819) 00:08:57.126 fused_ordering(820) 00:08:58.059 fused_ordering(821) 00:08:58.059 fused_ordering(822) 00:08:58.059 fused_ordering(823) 00:08:58.059 fused_ordering(824) 00:08:58.059 fused_ordering(825) 00:08:58.059 fused_ordering(826) 00:08:58.059 fused_ordering(827) 00:08:58.059 fused_ordering(828) 00:08:58.059 fused_ordering(829) 00:08:58.059 fused_ordering(830) 00:08:58.059 fused_ordering(831) 00:08:58.059 fused_ordering(832) 00:08:58.059 fused_ordering(833) 00:08:58.059 fused_ordering(834) 00:08:58.059 fused_ordering(835) 00:08:58.059 fused_ordering(836) 00:08:58.059 fused_ordering(837) 00:08:58.059 fused_ordering(838) 00:08:58.059 fused_ordering(839) 00:08:58.059 fused_ordering(840) 00:08:58.059 fused_ordering(841) 00:08:58.059 fused_ordering(842) 00:08:58.059 fused_ordering(843) 00:08:58.059 fused_ordering(844) 00:08:58.059 fused_ordering(845) 00:08:58.060 fused_ordering(846) 00:08:58.060 fused_ordering(847) 00:08:58.060 fused_ordering(848) 00:08:58.060 fused_ordering(849) 00:08:58.060 fused_ordering(850) 00:08:58.060 fused_ordering(851) 00:08:58.060 fused_ordering(852) 00:08:58.060 fused_ordering(853) 00:08:58.060 fused_ordering(854) 00:08:58.060 fused_ordering(855) 00:08:58.060 fused_ordering(856) 00:08:58.060 fused_ordering(857) 00:08:58.060 fused_ordering(858) 00:08:58.060 fused_ordering(859) 00:08:58.060 fused_ordering(860) 00:08:58.060 fused_ordering(861) 00:08:58.060 fused_ordering(862) 00:08:58.060 fused_ordering(863) 00:08:58.060 fused_ordering(864) 00:08:58.060 fused_ordering(865) 00:08:58.060 fused_ordering(866) 00:08:58.060 fused_ordering(867) 00:08:58.060 fused_ordering(868) 00:08:58.060 fused_ordering(869) 00:08:58.060 fused_ordering(870) 00:08:58.060 fused_ordering(871) 00:08:58.060 fused_ordering(872) 00:08:58.060 fused_ordering(873) 00:08:58.060 fused_ordering(874) 00:08:58.060 fused_ordering(875) 00:08:58.060 fused_ordering(876) 00:08:58.060 fused_ordering(877) 00:08:58.060 fused_ordering(878) 00:08:58.060 fused_ordering(879) 00:08:58.060 fused_ordering(880) 00:08:58.060 fused_ordering(881) 00:08:58.060 fused_ordering(882) 00:08:58.060 fused_ordering(883) 00:08:58.060 fused_ordering(884) 00:08:58.060 fused_ordering(885) 00:08:58.060 fused_ordering(886) 00:08:58.060 fused_ordering(887) 00:08:58.060 fused_ordering(888) 00:08:58.060 fused_ordering(889) 00:08:58.060 fused_ordering(890) 00:08:58.060 fused_ordering(891) 00:08:58.060 fused_ordering(892) 00:08:58.060 fused_ordering(893) 00:08:58.060 fused_ordering(894) 00:08:58.060 fused_ordering(895) 00:08:58.060 fused_ordering(896) 00:08:58.060 fused_ordering(897) 00:08:58.060 fused_ordering(898) 00:08:58.060 fused_ordering(899) 00:08:58.060 fused_ordering(900) 00:08:58.060 fused_ordering(901) 00:08:58.060 fused_ordering(902) 00:08:58.060 fused_ordering(903) 00:08:58.060 fused_ordering(904) 00:08:58.060 fused_ordering(905) 
00:08:58.060 fused_ordering(906) 00:08:58.060 fused_ordering(907) 00:08:58.060 fused_ordering(908) 00:08:58.060 fused_ordering(909) 00:08:58.060 fused_ordering(910) 00:08:58.060 fused_ordering(911) 00:08:58.060 fused_ordering(912) 00:08:58.060 fused_ordering(913) 00:08:58.060 fused_ordering(914) 00:08:58.060 fused_ordering(915) 00:08:58.060 fused_ordering(916) 00:08:58.060 fused_ordering(917) 00:08:58.060 fused_ordering(918) 00:08:58.060 fused_ordering(919) 00:08:58.060 fused_ordering(920) 00:08:58.060 fused_ordering(921) 00:08:58.060 fused_ordering(922) 00:08:58.060 fused_ordering(923) 00:08:58.060 fused_ordering(924) 00:08:58.060 fused_ordering(925) 00:08:58.060 fused_ordering(926) 00:08:58.060 fused_ordering(927) 00:08:58.060 fused_ordering(928) 00:08:58.060 fused_ordering(929) 00:08:58.060 fused_ordering(930) 00:08:58.060 fused_ordering(931) 00:08:58.060 fused_ordering(932) 00:08:58.060 fused_ordering(933) 00:08:58.060 fused_ordering(934) 00:08:58.060 fused_ordering(935) 00:08:58.060 fused_ordering(936) 00:08:58.060 fused_ordering(937) 00:08:58.060 fused_ordering(938) 00:08:58.060 fused_ordering(939) 00:08:58.060 fused_ordering(940) 00:08:58.060 fused_ordering(941) 00:08:58.060 fused_ordering(942) 00:08:58.060 fused_ordering(943) 00:08:58.060 fused_ordering(944) 00:08:58.060 fused_ordering(945) 00:08:58.060 fused_ordering(946) 00:08:58.060 fused_ordering(947) 00:08:58.060 fused_ordering(948) 00:08:58.060 fused_ordering(949) 00:08:58.060 fused_ordering(950) 00:08:58.060 fused_ordering(951) 00:08:58.060 fused_ordering(952) 00:08:58.060 fused_ordering(953) 00:08:58.060 fused_ordering(954) 00:08:58.060 fused_ordering(955) 00:08:58.060 fused_ordering(956) 00:08:58.060 fused_ordering(957) 00:08:58.060 fused_ordering(958) 00:08:58.060 fused_ordering(959) 00:08:58.060 fused_ordering(960) 00:08:58.060 fused_ordering(961) 00:08:58.060 fused_ordering(962) 00:08:58.060 fused_ordering(963) 00:08:58.060 fused_ordering(964) 00:08:58.060 fused_ordering(965) 00:08:58.060 fused_ordering(966) 00:08:58.060 fused_ordering(967) 00:08:58.060 fused_ordering(968) 00:08:58.060 fused_ordering(969) 00:08:58.060 fused_ordering(970) 00:08:58.060 fused_ordering(971) 00:08:58.060 fused_ordering(972) 00:08:58.060 fused_ordering(973) 00:08:58.060 fused_ordering(974) 00:08:58.060 fused_ordering(975) 00:08:58.060 fused_ordering(976) 00:08:58.060 fused_ordering(977) 00:08:58.060 fused_ordering(978) 00:08:58.060 fused_ordering(979) 00:08:58.060 fused_ordering(980) 00:08:58.060 fused_ordering(981) 00:08:58.060 fused_ordering(982) 00:08:58.060 fused_ordering(983) 00:08:58.060 fused_ordering(984) 00:08:58.060 fused_ordering(985) 00:08:58.060 fused_ordering(986) 00:08:58.060 fused_ordering(987) 00:08:58.060 fused_ordering(988) 00:08:58.060 fused_ordering(989) 00:08:58.060 fused_ordering(990) 00:08:58.060 fused_ordering(991) 00:08:58.060 fused_ordering(992) 00:08:58.060 fused_ordering(993) 00:08:58.060 fused_ordering(994) 00:08:58.060 fused_ordering(995) 00:08:58.060 fused_ordering(996) 00:08:58.060 fused_ordering(997) 00:08:58.060 fused_ordering(998) 00:08:58.060 fused_ordering(999) 00:08:58.060 fused_ordering(1000) 00:08:58.060 fused_ordering(1001) 00:08:58.060 fused_ordering(1002) 00:08:58.060 fused_ordering(1003) 00:08:58.060 fused_ordering(1004) 00:08:58.060 fused_ordering(1005) 00:08:58.060 fused_ordering(1006) 00:08:58.060 fused_ordering(1007) 00:08:58.060 fused_ordering(1008) 00:08:58.060 fused_ordering(1009) 00:08:58.060 fused_ordering(1010) 00:08:58.060 fused_ordering(1011) 00:08:58.060 fused_ordering(1012) 
00:08:58.060 fused_ordering(1013) 00:08:58.060 fused_ordering(1014) 00:08:58.060 fused_ordering(1015) 00:08:58.060 fused_ordering(1016) 00:08:58.060 fused_ordering(1017) 00:08:58.060 fused_ordering(1018) 00:08:58.060 fused_ordering(1019) 00:08:58.060 fused_ordering(1020) 00:08:58.060 fused_ordering(1021) 00:08:58.060 fused_ordering(1022) 00:08:58.060 fused_ordering(1023) 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.060 rmmod nvme_tcp 00:08:58.060 rmmod nvme_fabrics 00:08:58.060 rmmod nvme_keyring 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3237920 ']' 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3237920 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3237920 ']' 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3237920 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3237920 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3237920' 00:08:58.060 killing process with pid 3237920 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3237920 00:08:58.060 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3237920 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.319 19:03:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.233 19:03:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.233 00:09:00.233 real 0m7.999s 00:09:00.233 user 0m5.849s 00:09:00.233 sys 0m3.501s 00:09:00.233 19:03:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.233 19:03:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:00.233 ************************************ 00:09:00.233 END TEST nvmf_fused_ordering 00:09:00.233 ************************************ 00:09:00.492 19:03:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:00.492 19:03:40 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:00.492 19:03:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:00.492 19:03:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.492 19:03:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.492 ************************************ 00:09:00.492 START TEST nvmf_delete_subsystem 00:09:00.492 ************************************ 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:00.492 * Looking for test storage... 00:09:00.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.492 19:03:40 
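Editor's note: the fused_ordering run above finishes by tearing its target environment back down through nvmftestfini before nvmf_delete_subsystem starts. A minimal sketch of that cleanup, reconstructed from the trace above — the PID 3237920 and the cvl_0_* names are specific to this run, and killprocess/_remove_spdk_ns are harness helpers shown here in simplified form:

  #!/usr/bin/env bash
  # Simplified teardown mirroring nvmftestfini in the trace above.
  nvmfpid=3237920                      # target PID reported earlier in this run
  trap - SIGINT SIGTERM EXIT           # drop the error-handling trap first
  sync
  # Unload the kernel initiator modules pulled in for the TCP transport.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop the nvmf_tgt reactor process and wait for it to exit.
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
  # Remove the per-test network namespace and flush the initiator-side port.
  # (The trace only names the _remove_spdk_ns helper; deleting the *_ns_spdk
  #  namespace is an assumption about what it does.)
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null
  ip -4 addr flush cvl_0_1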
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.492 19:03:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.492 19:03:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.390 19:03:42 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.390 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.390 19:03:42 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:02.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.390 19:03:42 
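Editor's note: before delete_subsystem can run, nvmf/common.sh scans the PCI bus for supported NVMe-oF NICs; with SPDK_TEST_NVMF_NICS=e810 it matches the two Intel E810 functions (0x8086:0x159b) at 0000:0a:00.0/1 and resolves their kernel net devices (cvl_0_0, cvl_0_1) from sysfs. A condensed sketch of that discovery logic, with the device-ID list trimmed to the E810 IDs seen in this run (the real gather_supported_nvmf_pci_devs also handles x722 and Mellanox parts and RDMA-specific checks):

  # Minimal NVMf NIC discovery, modeled on gather_supported_nvmf_pci_devs.
  net_devs=()
  e810_ids=(0x1592 0x159b)             # Intel E810 device IDs checked above
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
      [[ $vendor == 0x8086 ]] || continue
      [[ " ${e810_ids[*]} " == *" $device "* ]] || continue
      # Map the PCI function to its kernel net device name(s).
      for net in "$pci"/net/*; do
          [[ -e $net ]] && net_devs+=("${net##*/}")
      done
  done
  printf 'Found net device %s\n' "${net_devs[@]}"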
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.390 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:09:02.648 00:09:02.648 --- 10.0.0.2 ping statistics --- 00:09:02.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.648 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:09:02.648 00:09:02.648 --- 10.0.0.1 ping statistics --- 00:09:02.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.648 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3240279 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3240279 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3240279 ']' 00:09:02.648 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.649 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.649 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.649 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.649 19:03:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.649 [2024-07-15 19:03:42.934899] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:09:02.649 [2024-07-15 19:03:42.934986] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.649 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.649 [2024-07-15 19:03:43.002806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.907 [2024-07-15 19:03:43.120296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
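Editor's note: the namespace setup and ping checks above build the point-to-point test topology: the target-side port cvl_0_0 is moved into a dedicated namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the default namespace as 10.0.0.1/24, the firewall is opened for the NVMe/TCP port, and connectivity is verified in both directions before nvmf_tgt is started. A condensed reproduction of those steps, using the interface names and addresses from this run:

  # Target interface lives in its own namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic (port 4420) in from the initiator interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1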
00:09:02.907 [2024-07-15 19:03:43.120351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.907 [2024-07-15 19:03:43.120368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.907 [2024-07-15 19:03:43.120381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.907 [2024-07-15 19:03:43.120393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.907 [2024-07-15 19:03:43.120471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.907 [2024-07-15 19:03:43.120478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.907 [2024-07-15 19:03:43.269835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.907 [2024-07-15 19:03:43.286078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.907 NULL1 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.907 Delay0 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3240428 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:02.907 19:03:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:03.164 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.164 [2024-07-15 19:03:43.360828] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
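Editor's note: at this point the target (nvmf_tgt -m 0x3, pid 3240279 in this run) is configured over RPC and a perf workload is started against the delay bdev so that I/O is still in flight when the subsystem gets deleted. The same sequence expressed as direct scripts/rpc.py calls — the test drives these through its rpc_cmd wrapper; the rpc.py form, the default RPC socket, and paths relative to the SPDK tree are assumptions here:

  # Configure the target, then launch a 5 s perf run from the initiator side.
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                 # 1000 MiB backing bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  # Delete the subsystem while perf still has commands queued against it.
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The long runs of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" that follow are the intended effect: the subsystem's queues are torn down underneath the outstanding perf commands, so they complete with an abort status (generic status 0x8, commands aborted as their queue is deleted) instead of success.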
00:09:05.093 19:03:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.093 19:03:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.093 19:03:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 starting I/O failed: -6 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.355 Read completed with error (sct=0, sc=8) 00:09:05.355 Write completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 [2024-07-15 19:03:45.531799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebf3e0 is same with the state(5) to be set 00:09:05.356 Write completed 
with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with 
error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 starting I/O failed: -6 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 [2024-07-15 19:03:45.532596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6b54000c00 is same with the state(5) to be set 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read 
completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.356 Write completed with error (sct=0, sc=8) 00:09:05.356 Read completed with error (sct=0, sc=8) 00:09:05.357 Write completed with error (sct=0, sc=8) 00:09:06.292 [2024-07-15 19:03:46.504726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec0ac0 is same with the state(5) to be set 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 [2024-07-15 19:03:46.534077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebf5c0 is same with the state(5) to be set 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed 
with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 [2024-07-15 19:03:46.534279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xebf980 is same with the state(5) to be set 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 [2024-07-15 19:03:46.534639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6b5400cfe0 is same with the state(5) to be set 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error 
(sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Read completed with error (sct=0, sc=8) 00:09:06.292 Write completed with error (sct=0, sc=8) 00:09:06.292 [2024-07-15 19:03:46.535316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6b5400d600 is same with the state(5) to be set 00:09:06.292 Initializing NVMe Controllers 00:09:06.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:06.292 Controller IO queue size 128, less than required. 00:09:06.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:06.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:06.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:06.292 Initialization complete. Launching workers. 00:09:06.292 ======================================================== 00:09:06.292 Latency(us) 00:09:06.292 Device Information : IOPS MiB/s Average min max 00:09:06.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.75 0.09 881494.44 700.45 1011122.32 00:09:06.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.86 0.08 913832.26 383.01 1011409.94 00:09:06.292 ======================================================== 00:09:06.292 Total : 338.61 0.17 896952.11 383.01 1011409.94 00:09:06.292 00:09:06.292 [2024-07-15 19:03:46.535846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec0ac0 (9): Bad file descriptor 00:09:06.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:06.292 19:03:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.292 19:03:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:06.292 19:03:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3240428 00:09:06.292 19:03:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3240428 00:09:06.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3240428) - No such process 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3240428 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3240428 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 
3240428 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.861 [2024-07-15 19:03:47.059377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3240840 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:06.861 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:06.861 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.861 [2024-07-15 19:03:47.123739] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
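Once the first perf process has been reaped, the script rebuilds the subsystem (this time capped at 10 namespaces with -m 10) and repeats the exercise with a 3-second workload, polling until perf exits on its own. A hedged sketch of that second pass, again lifted from the rpc_cmd calls and the delay/kill -0 loop visible in the trace; the exact loop wiring and the failure handling shown here are assumptions.

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  $PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Poll every 0.5 s until the short run finishes; give up if it is still
  # alive after more than 20 iterations (roughly 10 s).
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1    # assumed failure path
      sleep 0.5
  done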
00:09:07.428 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:07.428 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:07.428 19:03:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:07.698 19:03:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:07.698 19:03:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:07.698 19:03:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.268 19:03:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:08.268 19:03:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:08.268 19:03:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:08.835 19:03:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:08.835 19:03:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:08.835 19:03:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.401 19:03:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.401 19:03:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:09.401 19:03:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.659 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:09.660 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:09.660 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:09.918 Initializing NVMe Controllers 00:09:09.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:09.918 Controller IO queue size 128, less than required. 00:09:09.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:09.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:09.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:09.919 Initialization complete. Launching workers. 
00:09:09.919 ======================================================== 00:09:09.919 Latency(us) 00:09:09.919 Device Information : IOPS MiB/s Average min max 00:09:09.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004215.57 1000222.09 1011554.78 00:09:09.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004208.32 1000194.46 1012838.55 00:09:09.919 ======================================================== 00:09:09.919 Total : 256.00 0.12 1004211.94 1000194.46 1012838.55 00:09:09.919 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3240840 00:09:10.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3240840) - No such process 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3240840 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.177 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.177 rmmod nvme_tcp 00:09:10.436 rmmod nvme_fabrics 00:09:10.436 rmmod nvme_keyring 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3240279 ']' 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3240279 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3240279 ']' 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3240279 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3240279 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3240279' 00:09:10.436 killing process with pid 3240279 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3240279 00:09:10.436 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3240279 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.708 19:03:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.622 19:03:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:12.622 00:09:12.622 real 0m12.300s 00:09:12.622 user 0m27.806s 00:09:12.622 sys 0m2.945s 00:09:12.622 19:03:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.622 19:03:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.622 ************************************ 00:09:12.622 END TEST nvmf_delete_subsystem 00:09:12.622 ************************************ 00:09:12.622 19:03:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:12.622 19:03:53 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:12.622 19:03:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:12.622 19:03:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.622 19:03:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:12.622 ************************************ 00:09:12.622 START TEST nvmf_ns_masking 00:09:12.622 ************************************ 00:09:12.622 19:03:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:12.881 * Looking for test storage... 
00:09:12.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f11871c3-ca27-4b93-a9c1-cd0402e75887 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c2c16e1a-cdf9-4458-916d-21abb455563a 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:12.881 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ef0e9350-9430-4873-8e83-b139f19777b6 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:12.882 19:03:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:14.794 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:14.794 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.794 
19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:14.794 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:14.794 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.794 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.795 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:09:15.052 00:09:15.052 --- 10.0.0.2 ping statistics --- 00:09:15.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.052 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:09:15.052 00:09:15.052 --- 10.0.0.1 ping statistics --- 00:09:15.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.052 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.052 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3243184 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3243184 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3243184 ']' 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.053 19:03:55 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.053 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:15.053 [2024-07-15 19:03:55.414629] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:09:15.053 [2024-07-15 19:03:55.414720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.053 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.310 [2024-07-15 19:03:55.485710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.310 [2024-07-15 19:03:55.604035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.310 [2024-07-15 19:03:55.604092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.310 [2024-07-15 19:03:55.604108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.310 [2024-07-15 19:03:55.604122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.310 [2024-07-15 19:03:55.604133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.310 [2024-07-15 19:03:55.604173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.310 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.310 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:15.310 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.310 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.310 19:03:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:15.568 19:03:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.568 19:03:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.825 [2024-07-15 19:03:56.006982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.825 19:03:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:15.825 19:03:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:15.825 19:03:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:16.083 Malloc1 00:09:16.083 19:03:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:16.341 Malloc2 00:09:16.341 19:03:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
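The ns_masking test then stands up its own target configuration. Pulled out of the xtrace above, the target-side RPCs issued so far amount to the following; this is a sketch only, since the test actually drives them through its rpc_py helper against the nvmf_tgt running inside the cvl_0_0_ns_spdk network namespace.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # TCP transport; -u 8192 sets the in-capsule data size, -o is carried over
  # unchanged from the test's NVMF_TRANSPORT_OPTS.
  $RPC nvmf_create_transport -t tcp -o -u 8192

  # Two 64 MiB ram disks with 512-byte blocks, then the subsystem they will
  # be exposed through (-a: allow any host, -s: serial number).
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC bdev_malloc_create 64 512 -b Malloc2
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME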
00:09:16.599 19:03:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:16.858 19:03:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.116 [2024-07-15 19:03:57.402451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.116 19:03:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:17.116 19:03:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ef0e9350-9430-4873-8e83-b139f19777b6 -a 10.0.0.2 -s 4420 -i 4 00:09:17.374 19:03:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.374 19:03:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:17.374 19:03:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.374 19:03:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:17.374 19:03:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:19.273 [ 0]:0x1 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.273 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:19.531 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e59e044a445244cf96b5a7cc4ed6ccb1 00:09:19.531 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e59e044a445244cf96b5a7cc4ed6ccb1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.531 19:03:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
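On the host side, the connect and visibility checks interleaved above reduce to a few nvme-cli and jq calls. A sketch of what ns_is_visible() is doing, with the host ID taken from the trace and the shell variable names purely illustrative: a namespace counts as visible when it shows up in the active namespace list and reports a non-zero NGUID.

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
       -I ef0e9350-9430-4873-8e83-b139f19777b6 -a 10.0.0.2 -s 4420 -i 4

  # The real test resolves the controller node (nvme0) via 'nvme list-subsys -o json'.
  nvme list-ns /dev/nvme0 | grep 0x1                              # NSID 1 listed?
  nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
  [[ $nguid != 00000000000000000000000000000000 ]]                # an all-zero NGUID means the
                                                                  # namespace is masked from this host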
00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:19.788 [ 0]:0x1 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e59e044a445244cf96b5a7cc4ed6ccb1 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e59e044a445244cf96b5a7cc4ed6ccb1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:19.788 [ 1]:0x2 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fd2d476b21c4fff8c4fe1005bf727b4 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fd2d476b21c4fff8c4fe1005bf727b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.788 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.078 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:20.338 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:20.338 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ef0e9350-9430-4873-8e83-b139f19777b6 -a 10.0.0.2 -s 4420 -i 4 00:09:20.596 19:04:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:20.596 19:04:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:20.596 19:04:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.596 19:04:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:20.596 19:04:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:20.596 19:04:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.492 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.492 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.492 19:04:02 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.492 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.750 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.750 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:22.750 19:04:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:22.750 19:04:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:22.750 19:04:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:22.750 19:04:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:22.751 19:04:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:22.751 [ 0]:0x2 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fd2d476b21c4fff8c4fe1005bf727b4 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
6fd2d476b21c4fff8c4fe1005bf727b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:22.751 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:23.316 [ 0]:0x1 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e59e044a445244cf96b5a7cc4ed6ccb1 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e59e044a445244cf96b5a7cc4ed6ccb1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:23.316 [ 1]:0x2 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fd2d476b21c4fff8c4fe1005bf727b4 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fd2d476b21c4fff8c4fe1005bf727b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.316 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.574 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:23.574 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:23.575 [ 0]:0x2 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fd2d476b21c4fff8c4fe1005bf727b4 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fd2d476b21c4fff8c4fe1005bf727b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.575 19:04:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:23.833 19:04:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:23.833 19:04:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ef0e9350-9430-4873-8e83-b139f19777b6 -a 10.0.0.2 -s 4420 -i 4 00:09:24.103 19:04:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:24.103 19:04:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:24.103 19:04:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.103 19:04:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:24.104 19:04:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:24.104 19:04:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
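Stripped of the xtrace noise, the masking sequence the preceding lines drive is: re-add namespace 1 without auto-visibility, then grant and revoke access per host NQN, re-checking visibility from the initiator after each step. Condensed into one place (the $rpc shorthand is editorial; all arguments are the ones recorded in the log):

    # Per-host namespace masking as exercised by ns_masking.sh
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # hidden from every host by default
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # host1 can now see nsid 1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # host1 loses it again
    # Namespace 2 (Malloc2) was added without --no-auto-visible, so it stays visible throughout.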
00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.014 [ 0]:0x1 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e59e044a445244cf96b5a7cc4ed6ccb1 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e59e044a445244cf96b5a7cc4ed6ccb1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:26.014 [ 1]:0x2 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fd2d476b21c4fff8c4fe1005bf727b4 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fd2d476b21c4fff8c4fe1005bf727b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.014 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.271 19:04:06 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:26.528 [ 0]:0x2 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.528 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fd2d476b21c4fff8c4fe1005bf727b4 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fd2d476b21c4fff8c4fe1005bf727b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:26.529 19:04:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:26.786 [2024-07-15 19:04:07.099814] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:26.786 request: 00:09:26.786 { 00:09:26.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.786 "nsid": 2, 00:09:26.786 "host": "nqn.2016-06.io.spdk:host1", 00:09:26.786 "method": "nvmf_ns_remove_host", 00:09:26.786 "req_id": 1 00:09:26.786 } 00:09:26.786 Got JSON-RPC error response 00:09:26.786 response: 00:09:26.786 { 00:09:26.786 "code": -32602, 00:09:26.786 "message": "Invalid parameters" 00:09:26.786 } 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:26.786 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:27.043 [ 0]:0x2 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fd2d476b21c4fff8c4fe1005bf727b4 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
6fd2d476b21c4fff8c4fe1005bf727b4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3244806 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3244806 /var/tmp/host.sock 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3244806 ']' 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:27.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.043 19:04:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:27.043 [2024-07-15 19:04:07.405123] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
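The spdk_tgt being started here on /var/tmp/host.sock acts as an SPDK-side initiator, so the remaining checks go through bdev_nvme rather than /dev/nvme0. Once it is up, the test re-creates the namespaces with explicit NGUIDs and verifies, per host NQN, which bdevs appear and with which UUIDs. Condensed from the RPCs that follow; the hostrpc name mirrors the helper used in the log, and all flags are reproduced as recorded:

    # Host-side verification via SPDK's own bdev_nvme initiator
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc="$rpc -s /var/tmp/host.sock"
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F11871C3CA274B93A9C1CD0402E75887 -i   # NGUID from uuid2nguid; flags as recorded
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C2C16E1ACDF94458916D21ABB455563A -i
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # yields nvme0n1 only
    $hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # yields nvme1n2 only
    $hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'                     # expected: f11871c3-ca27-4b93-a9c1-cd0402e75887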
00:09:27.043 [2024-07-15 19:04:07.405238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244806 ] 00:09:27.043 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.043 [2024-07-15 19:04:07.467014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.301 [2024-07-15 19:04:07.585077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.233 19:04:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.233 19:04:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:28.233 19:04:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.233 19:04:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:28.491 19:04:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f11871c3-ca27-4b93-a9c1-cd0402e75887 00:09:28.491 19:04:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:28.491 19:04:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F11871C3CA274B93A9C1CD0402E75887 -i 00:09:29.056 19:04:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c2c16e1a-cdf9-4458-916d-21abb455563a 00:09:29.056 19:04:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:29.056 19:04:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C2C16E1ACDF94458916D21ABB455563A -i 00:09:29.056 19:04:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:29.313 19:04:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:29.571 19:04:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:29.571 19:04:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:29.829 nvme0n1 00:09:30.089 19:04:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:30.089 19:04:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:30.347 nvme1n2 00:09:30.347 19:04:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:30.347 19:04:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:30.347 19:04:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:30.347 19:04:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:30.347 19:04:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:30.604 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:30.604 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:30.604 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:30.604 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:30.862 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f11871c3-ca27-4b93-a9c1-cd0402e75887 == \f\1\1\8\7\1\c\3\-\c\a\2\7\-\4\b\9\3\-\a\9\c\1\-\c\d\0\4\0\2\e\7\5\8\8\7 ]] 00:09:30.862 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:30.862 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:30.862 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:31.120 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c2c16e1a-cdf9-4458-916d-21abb455563a == \c\2\c\1\6\e\1\a\-\c\d\f\9\-\4\4\5\8\-\9\1\6\d\-\2\1\a\b\b\4\5\5\5\6\3\a ]] 00:09:31.120 19:04:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3244806 00:09:31.120 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3244806 ']' 00:09:31.120 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3244806 00:09:31.120 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:31.120 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:31.120 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3244806 00:09:31.380 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:31.380 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:31.380 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3244806' 00:09:31.380 killing process with pid 3244806 00:09:31.380 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3244806 00:09:31.380 19:04:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3244806 00:09:31.639 19:04:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:31.897 19:04:12 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.897 rmmod nvme_tcp 00:09:31.897 rmmod nvme_fabrics 00:09:31.897 rmmod nvme_keyring 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3243184 ']' 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3243184 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3243184 ']' 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3243184 00:09:31.897 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:32.155 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.155 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3243184 00:09:32.155 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.155 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.155 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3243184' 00:09:32.155 killing process with pid 3243184 00:09:32.155 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3243184 00:09:32.155 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3243184 00:09:32.414 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.414 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.414 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.415 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.415 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.415 19:04:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.415 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.415 19:04:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.320 19:04:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:34.320 00:09:34.320 real 0m21.673s 00:09:34.320 user 0m28.745s 00:09:34.320 sys 0m4.207s 00:09:34.320 19:04:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.320 19:04:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:34.320 ************************************ 00:09:34.320 END TEST nvmf_ns_masking 00:09:34.320 ************************************ 00:09:34.320 19:04:14 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:34.320 19:04:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:34.320 19:04:14 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:34.320 19:04:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:34.320 19:04:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.320 19:04:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.607 ************************************ 00:09:34.607 START TEST nvmf_nvme_cli 00:09:34.607 ************************************ 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:34.607 * Looking for test storage... 00:09:34.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.607 19:04:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:36.510 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:36.510 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:36.510 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:36.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:36.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:36.511 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:36.769 19:04:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:36.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:36.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:09:36.769 00:09:36.769 --- 10.0.0.2 ping statistics --- 00:09:36.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.769 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:36.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:36.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:09:36.769 00:09:36.769 --- 10.0.0.1 ping statistics --- 00:09:36.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:36.769 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:36.769 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:36.770 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:36.770 19:04:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3247311 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3247311 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3247311 ']' 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.770 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:36.770 [2024-07-15 19:04:17.059568] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
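Because this is a phy run on the e810 ports, nvmftestinit isolates the target side in a network namespace before nvmf_tgt is launched, which is why the target above is started under ip netns exec cvl_0_0_ns_spdk. The essential topology commands, pulled out of the xtrace a few lines earlier; the cvl_0_0/cvl_0_1 names are simply the interface names reported on this particular host:

    # Two-port loopback topology set up by nvmftestinit (NET_TYPE=phy)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator to target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target to initiator reachability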
00:09:36.770 [2024-07-15 19:04:17.059656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.770 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.770 [2024-07-15 19:04:17.132445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.028 [2024-07-15 19:04:17.257198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.028 [2024-07-15 19:04:17.257256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.028 [2024-07-15 19:04:17.257272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.028 [2024-07-15 19:04:17.257285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.028 [2024-07-15 19:04:17.257296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.028 [2024-07-15 19:04:17.257378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.028 [2024-07-15 19:04:17.257437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.028 [2024-07-15 19:04:17.257488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.028 [2024-07-15 19:04:17.257491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.028 [2024-07-15 19:04:17.405600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.028 Malloc0 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.028 Malloc1 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.028 19:04:17 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:37.028 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.286 [2024-07-15 19:04:17.486709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.286 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:37.286 00:09:37.286 Discovery Log Number of Records 2, Generation counter 2 00:09:37.286 =====Discovery Log Entry 0====== 00:09:37.286 trtype: tcp 00:09:37.286 adrfam: ipv4 00:09:37.286 subtype: current discovery subsystem 00:09:37.286 treq: not required 00:09:37.286 portid: 0 00:09:37.286 trsvcid: 4420 00:09:37.286 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:37.286 traddr: 10.0.0.2 00:09:37.286 eflags: explicit discovery connections, duplicate discovery information 00:09:37.286 sectype: none 00:09:37.286 =====Discovery Log Entry 1====== 00:09:37.286 trtype: tcp 00:09:37.286 adrfam: ipv4 00:09:37.286 subtype: nvme subsystem 00:09:37.287 treq: not required 00:09:37.287 portid: 0 00:09:37.287 trsvcid: 4420 00:09:37.287 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:37.287 traddr: 10.0.0.2 00:09:37.287 eflags: none 00:09:37.287 sectype: none 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:37.287 19:04:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.852 19:04:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:37.852 19:04:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.852 19:04:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.852 19:04:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:37.852 19:04:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:37.852 19:04:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:40.379 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:40.380 19:04:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:40.380 /dev/nvme0n1 ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.380 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.380 rmmod nvme_tcp 00:09:40.380 rmmod nvme_fabrics 00:09:40.380 rmmod nvme_keyring 00:09:40.638 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.638 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3247311 ']' 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3247311 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3247311 ']' 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3247311 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3247311 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3247311' 00:09:40.639 killing process with pid 3247311 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3247311 00:09:40.639 19:04:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3247311 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.897 19:04:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.430 19:04:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:43.430 00:09:43.430 real 0m8.473s 00:09:43.430 user 0m16.101s 00:09:43.430 sys 0m2.210s 00:09:43.430 19:04:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.430 19:04:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.430 ************************************ 00:09:43.430 END TEST nvmf_nvme_cli 00:09:43.430 ************************************ 00:09:43.430 19:04:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:43.430 19:04:23 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:43.430 19:04:23 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:43.430 19:04:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.430 19:04:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.430 19:04:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.430 ************************************ 00:09:43.430 START TEST nvmf_vfio_user 00:09:43.430 ************************************ 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:43.430 * Looking for test storage... 00:09:43.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:43.430 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:43.430 
19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3248236 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3248236' 00:09:43.431 Process pid: 3248236 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3248236 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3248236 ']' 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 [2024-07-15 19:04:23.424765] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:09:43.431 [2024-07-15 19:04:23.424864] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.431 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.431 [2024-07-15 19:04:23.484156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.431 [2024-07-15 19:04:23.591134] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.431 [2024-07-15 19:04:23.591196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.431 [2024-07-15 19:04:23.591224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.431 [2024-07-15 19:04:23.591236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.431 [2024-07-15 19:04:23.591246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
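The vfio-user test needs no IP plumbing at all: each emulated NVMe controller is exposed through a socket directory under /var/run/vfio-user that a local process maps like a PCI device. The setup_nvmf_vfio_user step traced below boils down to this sketch, where rpc.py stands for the full scripts/rpc.py path used in the trace and the loop covers the NUM_DEVICES=2 controllers:

    rm -rf /var/run/vfio-user
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &                # target app on cores 0-3
    rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i                    # 64 MB backing bdev, 512 B blocks
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

A client such as build/bin/spdk_nvme_identify can then reach a controller with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', which is exactly what the test does further down.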
00:09:43.431 [2024-07-15 19:04:23.591311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.431 [2024-07-15 19:04:23.591404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.431 [2024-07-15 19:04:23.591456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.431 [2024-07-15 19:04:23.591454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:43.431 19:04:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:44.363 19:04:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:44.620 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:44.620 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:44.620 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:44.620 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:44.620 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:44.878 Malloc1 00:09:45.135 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:45.135 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:45.391 19:04:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:45.649 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:45.649 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:45.649 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:46.213 Malloc2 00:09:46.213 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:46.213 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:46.471 19:04:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:46.728 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:46.728 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:46.728 19:04:27 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:46.728 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:46.728 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:46.728 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:46.728 [2024-07-15 19:04:27.113965] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:09:46.728 [2024-07-15 19:04:27.114015] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248662 ] 00:09:46.728 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.728 [2024-07-15 19:04:27.147233] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:46.728 [2024-07-15 19:04:27.156355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:46.728 [2024-07-15 19:04:27.156387] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0f0979e000 00:09:46.728 [2024-07-15 19:04:27.157346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:46.728 [2024-07-15 19:04:27.158346] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:46.728 [2024-07-15 19:04:27.159352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:46.987 [2024-07-15 19:04:27.160355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:46.987 [2024-07-15 19:04:27.161364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:46.987 [2024-07-15 19:04:27.162363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:46.987 [2024-07-15 19:04:27.163368] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:46.987 [2024-07-15 19:04:27.164373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:46.987 [2024-07-15 19:04:27.165378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:46.987 [2024-07-15 19:04:27.165397] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0f09793000 00:09:46.987 [2024-07-15 19:04:27.166543] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:46.987 [2024-07-15 19:04:27.181461] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:46.987 [2024-07-15 19:04:27.181498] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:46.987 [2024-07-15 19:04:27.186491] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:46.987 [2024-07-15 19:04:27.186541] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:46.987 [2024-07-15 19:04:27.186632] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:46.987 [2024-07-15 19:04:27.186659] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:46.987 [2024-07-15 19:04:27.186669] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:46.987 [2024-07-15 19:04:27.187481] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:46.987 [2024-07-15 19:04:27.187501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:46.987 [2024-07-15 19:04:27.187514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:46.987 [2024-07-15 19:04:27.188484] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:46.987 [2024-07-15 19:04:27.188502] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:46.987 [2024-07-15 19:04:27.188515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:46.987 [2024-07-15 19:04:27.189494] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:46.987 [2024-07-15 19:04:27.189512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:46.987 [2024-07-15 19:04:27.190502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:46.987 [2024-07-15 19:04:27.190522] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:46.987 [2024-07-15 19:04:27.190538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:46.987 [2024-07-15 19:04:27.190550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:46.987 [2024-07-15 19:04:27.190663] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:46.987 [2024-07-15 19:04:27.190672] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:46.987 [2024-07-15 19:04:27.190680] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:46.987 [2024-07-15 19:04:27.191512] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:46.987 [2024-07-15 19:04:27.195892] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:46.987 [2024-07-15 19:04:27.196539] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:46.987 [2024-07-15 19:04:27.197530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:46.987 [2024-07-15 19:04:27.197640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:46.987 [2024-07-15 19:04:27.198546] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:46.987 [2024-07-15 19:04:27.198564] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:46.987 [2024-07-15 19:04:27.198573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.198597] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:46.987 [2024-07-15 19:04:27.198610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.198633] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:46.987 [2024-07-15 19:04:27.198643] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:46.987 [2024-07-15 19:04:27.198661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:46.987 [2024-07-15 19:04:27.198717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:46.987 [2024-07-15 19:04:27.198733] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:46.987 [2024-07-15 19:04:27.198745] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:46.987 [2024-07-15 19:04:27.198753] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:46.987 [2024-07-15 19:04:27.198760] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:46.987 [2024-07-15 19:04:27.198768] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:46.987 [2024-07-15 19:04:27.198775] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:46.987 [2024-07-15 19:04:27.198782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.198794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.198813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:46.987 [2024-07-15 19:04:27.198828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:46.987 [2024-07-15 19:04:27.198851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:46.987 [2024-07-15 19:04:27.198889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:46.987 [2024-07-15 19:04:27.198904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:46.987 [2024-07-15 19:04:27.198927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:46.987 [2024-07-15 19:04:27.198935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.198951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.198967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:46.987 [2024-07-15 19:04:27.198980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:46.987 [2024-07-15 19:04:27.198991] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:46.987 [2024-07-15 19:04:27.198999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.199010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.199020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.199033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:46.987 [2024-07-15 19:04:27.199048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:46.987 [2024-07-15 19:04:27.199113] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.199129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:46.987 [2024-07-15 19:04:27.199142] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:46.987 [2024-07-15 19:04:27.199151] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:46.987 [2024-07-15 19:04:27.199176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:46.987 [2024-07-15 19:04:27.199191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199208] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:46.988 [2024-07-15 19:04:27.199243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199274] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:46.988 [2024-07-15 19:04:27.199284] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:46.988 [2024-07-15 19:04:27.199293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199366] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:46.988 [2024-07-15 19:04:27.199375] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:46.988 [2024-07-15 19:04:27.199384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:09:46.988 [2024-07-15 19:04:27.199436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199446] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199469] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:46.988 [2024-07-15 19:04:27.199476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:46.988 [2024-07-15 19:04:27.199484] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:46.988 [2024-07-15 19:04:27.199509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199646] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:46.988 [2024-07-15 19:04:27.199656] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:46.988 [2024-07-15 19:04:27.199662] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:46.988 [2024-07-15 19:04:27.199668] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:46.988 [2024-07-15 19:04:27.199677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:46.988 [2024-07-15 19:04:27.199689] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:46.988 
[2024-07-15 19:04:27.199697] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:46.988 [2024-07-15 19:04:27.199705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199716] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:46.988 [2024-07-15 19:04:27.199724] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:46.988 [2024-07-15 19:04:27.199732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199744] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:46.988 [2024-07-15 19:04:27.199752] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:46.988 [2024-07-15 19:04:27.199760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.199822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:46.988 ===================================================== 00:09:46.988 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:46.988 ===================================================== 00:09:46.988 Controller Capabilities/Features 00:09:46.988 ================================ 00:09:46.988 Vendor ID: 4e58 00:09:46.988 Subsystem Vendor ID: 4e58 00:09:46.988 Serial Number: SPDK1 00:09:46.988 Model Number: SPDK bdev Controller 00:09:46.988 Firmware Version: 24.09 00:09:46.988 Recommended Arb Burst: 6 00:09:46.988 IEEE OUI Identifier: 8d 6b 50 00:09:46.988 Multi-path I/O 00:09:46.988 May have multiple subsystem ports: Yes 00:09:46.988 May have multiple controllers: Yes 00:09:46.988 Associated with SR-IOV VF: No 00:09:46.988 Max Data Transfer Size: 131072 00:09:46.988 Max Number of Namespaces: 32 00:09:46.988 Max Number of I/O Queues: 127 00:09:46.988 NVMe Specification Version (VS): 1.3 00:09:46.988 NVMe Specification Version (Identify): 1.3 00:09:46.988 Maximum Queue Entries: 256 00:09:46.988 Contiguous Queues Required: Yes 00:09:46.988 Arbitration Mechanisms Supported 00:09:46.988 Weighted Round Robin: Not Supported 00:09:46.988 Vendor Specific: Not Supported 00:09:46.988 Reset Timeout: 15000 ms 00:09:46.988 Doorbell Stride: 4 bytes 00:09:46.988 NVM Subsystem Reset: Not Supported 00:09:46.988 Command Sets Supported 00:09:46.988 NVM Command Set: Supported 00:09:46.988 Boot Partition: Not Supported 00:09:46.988 Memory Page Size Minimum: 4096 bytes 00:09:46.988 Memory Page Size Maximum: 4096 bytes 00:09:46.988 Persistent Memory Region: Not Supported 
00:09:46.988 Optional Asynchronous Events Supported 00:09:46.988 Namespace Attribute Notices: Supported 00:09:46.988 Firmware Activation Notices: Not Supported 00:09:46.988 ANA Change Notices: Not Supported 00:09:46.988 PLE Aggregate Log Change Notices: Not Supported 00:09:46.988 LBA Status Info Alert Notices: Not Supported 00:09:46.988 EGE Aggregate Log Change Notices: Not Supported 00:09:46.988 Normal NVM Subsystem Shutdown event: Not Supported 00:09:46.988 Zone Descriptor Change Notices: Not Supported 00:09:46.988 Discovery Log Change Notices: Not Supported 00:09:46.988 Controller Attributes 00:09:46.988 128-bit Host Identifier: Supported 00:09:46.988 Non-Operational Permissive Mode: Not Supported 00:09:46.988 NVM Sets: Not Supported 00:09:46.988 Read Recovery Levels: Not Supported 00:09:46.988 Endurance Groups: Not Supported 00:09:46.988 Predictable Latency Mode: Not Supported 00:09:46.988 Traffic Based Keep ALive: Not Supported 00:09:46.988 Namespace Granularity: Not Supported 00:09:46.988 SQ Associations: Not Supported 00:09:46.988 UUID List: Not Supported 00:09:46.988 Multi-Domain Subsystem: Not Supported 00:09:46.988 Fixed Capacity Management: Not Supported 00:09:46.988 Variable Capacity Management: Not Supported 00:09:46.988 Delete Endurance Group: Not Supported 00:09:46.988 Delete NVM Set: Not Supported 00:09:46.988 Extended LBA Formats Supported: Not Supported 00:09:46.988 Flexible Data Placement Supported: Not Supported 00:09:46.988 00:09:46.988 Controller Memory Buffer Support 00:09:46.988 ================================ 00:09:46.988 Supported: No 00:09:46.988 00:09:46.988 Persistent Memory Region Support 00:09:46.988 ================================ 00:09:46.988 Supported: No 00:09:46.988 00:09:46.988 Admin Command Set Attributes 00:09:46.988 ============================ 00:09:46.988 Security Send/Receive: Not Supported 00:09:46.988 Format NVM: Not Supported 00:09:46.988 Firmware Activate/Download: Not Supported 00:09:46.988 Namespace Management: Not Supported 00:09:46.988 Device Self-Test: Not Supported 00:09:46.988 Directives: Not Supported 00:09:46.988 NVMe-MI: Not Supported 00:09:46.988 Virtualization Management: Not Supported 00:09:46.988 Doorbell Buffer Config: Not Supported 00:09:46.988 Get LBA Status Capability: Not Supported 00:09:46.988 Command & Feature Lockdown Capability: Not Supported 00:09:46.988 Abort Command Limit: 4 00:09:46.988 Async Event Request Limit: 4 00:09:46.988 Number of Firmware Slots: N/A 00:09:46.988 Firmware Slot 1 Read-Only: N/A 00:09:46.988 Firmware Activation Without Reset: N/A 00:09:46.988 Multiple Update Detection Support: N/A 00:09:46.988 Firmware Update Granularity: No Information Provided 00:09:46.988 Per-Namespace SMART Log: No 00:09:46.988 Asymmetric Namespace Access Log Page: Not Supported 00:09:46.988 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:46.988 Command Effects Log Page: Supported 00:09:46.988 Get Log Page Extended Data: Supported 00:09:46.988 Telemetry Log Pages: Not Supported 00:09:46.988 Persistent Event Log Pages: Not Supported 00:09:46.988 Supported Log Pages Log Page: May Support 00:09:46.988 Commands Supported & Effects Log Page: Not Supported 00:09:46.988 Feature Identifiers & Effects Log Page:May Support 00:09:46.988 NVMe-MI Commands & Effects Log Page: May Support 00:09:46.988 Data Area 4 for Telemetry Log: Not Supported 00:09:46.988 Error Log Page Entries Supported: 128 00:09:46.988 Keep Alive: Supported 00:09:46.988 Keep Alive Granularity: 10000 ms 00:09:46.988 00:09:46.988 NVM Command Set Attributes 
00:09:46.988 ========================== 00:09:46.988 Submission Queue Entry Size 00:09:46.988 Max: 64 00:09:46.988 Min: 64 00:09:46.988 Completion Queue Entry Size 00:09:46.988 Max: 16 00:09:46.988 Min: 16 00:09:46.988 Number of Namespaces: 32 00:09:46.988 Compare Command: Supported 00:09:46.988 Write Uncorrectable Command: Not Supported 00:09:46.988 Dataset Management Command: Supported 00:09:46.988 Write Zeroes Command: Supported 00:09:46.988 Set Features Save Field: Not Supported 00:09:46.988 Reservations: Not Supported 00:09:46.988 Timestamp: Not Supported 00:09:46.988 Copy: Supported 00:09:46.988 Volatile Write Cache: Present 00:09:46.988 Atomic Write Unit (Normal): 1 00:09:46.988 Atomic Write Unit (PFail): 1 00:09:46.988 Atomic Compare & Write Unit: 1 00:09:46.988 Fused Compare & Write: Supported 00:09:46.988 Scatter-Gather List 00:09:46.988 SGL Command Set: Supported (Dword aligned) 00:09:46.988 SGL Keyed: Not Supported 00:09:46.988 SGL Bit Bucket Descriptor: Not Supported 00:09:46.988 SGL Metadata Pointer: Not Supported 00:09:46.988 Oversized SGL: Not Supported 00:09:46.988 SGL Metadata Address: Not Supported 00:09:46.988 SGL Offset: Not Supported 00:09:46.988 Transport SGL Data Block: Not Supported 00:09:46.988 Replay Protected Memory Block: Not Supported 00:09:46.988 00:09:46.988 Firmware Slot Information 00:09:46.988 ========================= 00:09:46.988 Active slot: 1 00:09:46.988 Slot 1 Firmware Revision: 24.09 00:09:46.988 00:09:46.988 00:09:46.988 Commands Supported and Effects 00:09:46.988 ============================== 00:09:46.988 Admin Commands 00:09:46.988 -------------- 00:09:46.988 Get Log Page (02h): Supported 00:09:46.988 Identify (06h): Supported 00:09:46.988 Abort (08h): Supported 00:09:46.988 Set Features (09h): Supported 00:09:46.988 Get Features (0Ah): Supported 00:09:46.988 Asynchronous Event Request (0Ch): Supported 00:09:46.988 Keep Alive (18h): Supported 00:09:46.988 I/O Commands 00:09:46.988 ------------ 00:09:46.988 Flush (00h): Supported LBA-Change 00:09:46.988 Write (01h): Supported LBA-Change 00:09:46.988 Read (02h): Supported 00:09:46.988 Compare (05h): Supported 00:09:46.988 Write Zeroes (08h): Supported LBA-Change 00:09:46.988 Dataset Management (09h): Supported LBA-Change 00:09:46.988 Copy (19h): Supported LBA-Change 00:09:46.988 00:09:46.988 Error Log 00:09:46.988 ========= 00:09:46.988 00:09:46.988 Arbitration 00:09:46.988 =========== 00:09:46.988 Arbitration Burst: 1 00:09:46.988 00:09:46.988 Power Management 00:09:46.988 ================ 00:09:46.988 Number of Power States: 1 00:09:46.988 Current Power State: Power State #0 00:09:46.988 Power State #0: 00:09:46.988 Max Power: 0.00 W 00:09:46.988 Non-Operational State: Operational 00:09:46.988 Entry Latency: Not Reported 00:09:46.988 Exit Latency: Not Reported 00:09:46.988 Relative Read Throughput: 0 00:09:46.988 Relative Read Latency: 0 00:09:46.988 Relative Write Throughput: 0 00:09:46.988 Relative Write Latency: 0 00:09:46.988 Idle Power: Not Reported 00:09:46.988 Active Power: Not Reported 00:09:46.988 Non-Operational Permissive Mode: Not Supported 00:09:46.988 00:09:46.988 Health Information 00:09:46.988 ================== 00:09:46.988 Critical Warnings: 00:09:46.988 Available Spare Space: OK 00:09:46.988 Temperature: OK 00:09:46.988 Device Reliability: OK 00:09:46.988 Read Only: No 00:09:46.988 Volatile Memory Backup: OK 00:09:46.988 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:46.988 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:46.988 Available Spare: 0% 00:09:46.988 
[2024-07-15 19:04:27.199974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:46.988 [2024-07-15 19:04:27.199992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:46.988 [2024-07-15 19:04:27.200041] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:46.988 [2024-07-15 19:04:27.200060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.989 [2024-07-15 19:04:27.200072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.989 [2024-07-15 19:04:27.200083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.989 [2024-07-15 19:04:27.200093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:46.989 [2024-07-15 19:04:27.200555] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:46.989 [2024-07-15 19:04:27.200579] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:46.989 [2024-07-15 19:04:27.201552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:46.989 [2024-07-15 19:04:27.201646] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:46.989 [2024-07-15 19:04:27.201660] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:46.989 [2024-07-15 19:04:27.202560] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:46.989 [2024-07-15 19:04:27.202591] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:46.989 [2024-07-15 19:04:27.202644] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:46.989 [2024-07-15 19:04:27.204600] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:46.989 Available Spare Threshold: 0% 00:09:46.989 Life Percentage Used: 0% 00:09:46.989 Data Units Read: 0 00:09:46.989 Data Units Written: 0 00:09:46.989 Host Read Commands: 0 00:09:46.989 Host Write Commands: 0 00:09:46.989 Controller Busy Time: 0 minutes 00:09:46.989 Power Cycles: 0 00:09:46.989 Power On Hours: 0 hours 00:09:46.989 Unsafe Shutdowns: 0 00:09:46.989 Unrecoverable Media Errors: 0 00:09:46.989 Lifetime Error Log Entries: 0 00:09:46.989 Warning Temperature Time: 0 minutes 00:09:46.989 Critical Temperature Time: 0 minutes 00:09:46.989 00:09:46.989 Number of Queues 00:09:46.989 ================ 00:09:46.989 Number of I/O Submission Queues: 127 00:09:46.989 Number of I/O Completion Queues: 127 00:09:46.989 00:09:46.989 Active Namespaces 00:09:46.989 ================= 00:09:46.989 Namespace ID:1 00:09:46.989 Error Recovery Timeout: Unlimited 00:09:46.989 Command
Set Identifier: NVM (00h) 00:09:46.989 Deallocate: Supported 00:09:46.989 Deallocated/Unwritten Error: Not Supported 00:09:46.989 Deallocated Read Value: Unknown 00:09:46.989 Deallocate in Write Zeroes: Not Supported 00:09:46.989 Deallocated Guard Field: 0xFFFF 00:09:46.989 Flush: Supported 00:09:46.989 Reservation: Supported 00:09:46.989 Namespace Sharing Capabilities: Multiple Controllers 00:09:46.989 Size (in LBAs): 131072 (0GiB) 00:09:46.989 Capacity (in LBAs): 131072 (0GiB) 00:09:46.989 Utilization (in LBAs): 131072 (0GiB) 00:09:46.989 NGUID: 88006F59288C4DBD8F3BBC3FA75F4B09 00:09:46.989 UUID: 88006f59-288c-4dbd-8f3b-bc3fa75f4b09 00:09:46.989 Thin Provisioning: Not Supported 00:09:46.989 Per-NS Atomic Units: Yes 00:09:46.989 Atomic Boundary Size (Normal): 0 00:09:46.989 Atomic Boundary Size (PFail): 0 00:09:46.989 Atomic Boundary Offset: 0 00:09:46.989 Maximum Single Source Range Length: 65535 00:09:46.989 Maximum Copy Length: 65535 00:09:46.989 Maximum Source Range Count: 1 00:09:46.989 NGUID/EUI64 Never Reused: No 00:09:46.989 Namespace Write Protected: No 00:09:46.989 Number of LBA Formats: 1 00:09:46.989 Current LBA Format: LBA Format #00 00:09:46.989 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:46.989 00:09:46.989 19:04:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:46.989 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.246 [2024-07-15 19:04:27.433697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.514 Initializing NVMe Controllers 00:09:52.514 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:52.514 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:52.514 Initialization complete. Launching workers. 00:09:52.514 ======================================================== 00:09:52.514 Latency(us) 00:09:52.514 Device Information : IOPS MiB/s Average min max 00:09:52.514 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34546.98 134.95 3706.37 1156.77 8303.58 00:09:52.514 ======================================================== 00:09:52.514 Total : 34546.98 134.95 3706.37 1156.77 8303.58 00:09:52.514 00:09:52.514 [2024-07-15 19:04:32.457225] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.514 19:04:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:52.514 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.514 [2024-07-15 19:04:32.702398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:57.830 Initializing NVMe Controllers 00:09:57.830 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:57.830 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:57.830 Initialization complete. Launching workers. 
00:09:57.830 ======================================================== 00:09:57.830 Latency(us) 00:09:57.830 Device Information : IOPS MiB/s Average min max 00:09:57.830 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15954.02 62.32 8028.33 5981.03 15975.25 00:09:57.830 ======================================================== 00:09:57.830 Total : 15954.02 62.32 8028.33 5981.03 15975.25 00:09:57.830 00:09:57.830 [2024-07-15 19:04:37.741026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:57.830 19:04:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:57.830 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.830 [2024-07-15 19:04:37.959055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:03.119 [2024-07-15 19:04:43.038242] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:03.119 Initializing NVMe Controllers 00:10:03.119 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:03.119 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:03.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:03.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:03.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:03.119 Initialization complete. Launching workers. 00:10:03.119 Starting thread on core 2 00:10:03.119 Starting thread on core 3 00:10:03.119 Starting thread on core 1 00:10:03.119 19:04:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:03.119 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.119 [2024-07-15 19:04:43.348373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:06.403 [2024-07-15 19:04:46.409688] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:06.403 Initializing NVMe Controllers 00:10:06.403 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:06.403 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:06.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:06.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:06.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:06.403 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:06.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:06.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:06.404 Initialization complete. Launching workers. 
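As a quick cross-check, the two spdk_nvme_perf tables above (the @84 read run and the @85 write run) are internally consistent with the options they were launched with (-q 128, -o 4096):
MiB/s = IOPS x 4096 / 2^20, i.e. 34546.98 / 256 = 134.95 for the read run and 15954.02 / 256 = 62.32 for the write run, matching the reported throughput columns.
Little's law, average latency ~ queue depth / IOPS: 128 / 34546.98 = roughly 3705 us against the reported 3706.37 us (read), and 128 / 15954.02 = roughly 8023 us against the reported 8028.33 us (write).
(The arbitration run launched just above prints its per-core results next.)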
00:10:06.404 Starting thread on core 1 with urgent priority queue 00:10:06.404 Starting thread on core 2 with urgent priority queue 00:10:06.404 Starting thread on core 3 with urgent priority queue 00:10:06.404 Starting thread on core 0 with urgent priority queue 00:10:06.404 SPDK bdev Controller (SPDK1 ) core 0: 4847.67 IO/s 20.63 secs/100000 ios 00:10:06.404 SPDK bdev Controller (SPDK1 ) core 1: 4584.33 IO/s 21.81 secs/100000 ios 00:10:06.404 SPDK bdev Controller (SPDK1 ) core 2: 4822.33 IO/s 20.74 secs/100000 ios 00:10:06.404 SPDK bdev Controller (SPDK1 ) core 3: 5004.00 IO/s 19.98 secs/100000 ios 00:10:06.404 ======================================================== 00:10:06.404 00:10:06.404 19:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:06.404 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.404 [2024-07-15 19:04:46.711317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:06.404 Initializing NVMe Controllers 00:10:06.404 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:06.404 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:06.404 Namespace ID: 1 size: 0GB 00:10:06.404 Initialization complete. 00:10:06.404 INFO: using host memory buffer for IO 00:10:06.404 Hello world! 00:10:06.404 [2024-07-15 19:04:46.742776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:06.404 19:04:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:06.404 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.663 [2024-07-15 19:04:47.021350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:08.040 Initializing NVMe Controllers 00:10:08.040 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:08.040 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:08.040 Initialization complete. Launching workers. 
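All of the example tools driven in this test (spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) reach the target through the same vfio-user transport ID string. A minimal sketch of replaying the @84 read workload by hand, assuming the same build tree and an nvmf target still listening on the socket path shown above (the SPDK and TRID shell variables are introduced here only for readability):
# Paths and options copied from the invocations logged above; adjust if the
# target socket or the SPDK build directory differs on the replaying machine.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# 4 KiB reads at queue depth 128 for 5 seconds on core mask 0x2, as in the @84 run
"$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
(The overhead tool launched just above prints its submit and complete latency histograms next.)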
00:10:08.040 submit (in ns) avg, min, max = 6884.1, 3492.2, 4017064.4 00:10:08.040 complete (in ns) avg, min, max = 25267.3, 2060.0, 7007045.6 00:10:08.040 00:10:08.040 Submit histogram 00:10:08.040 ================ 00:10:08.040 Range in us Cumulative Count 00:10:08.040 3.484 - 3.508: 0.0663% ( 9) 00:10:08.040 3.508 - 3.532: 0.2284% ( 22) 00:10:08.040 3.532 - 3.556: 1.0976% ( 118) 00:10:08.040 3.556 - 3.579: 3.4328% ( 317) 00:10:08.040 3.579 - 3.603: 7.9926% ( 619) 00:10:08.040 3.603 - 3.627: 14.5267% ( 887) 00:10:08.040 3.627 - 3.650: 23.6169% ( 1234) 00:10:08.040 3.650 - 3.674: 32.8987% ( 1260) 00:10:08.040 3.674 - 3.698: 41.8785% ( 1219) 00:10:08.040 3.698 - 3.721: 50.2247% ( 1133) 00:10:08.040 3.721 - 3.745: 55.8748% ( 767) 00:10:08.040 3.745 - 3.769: 60.9429% ( 688) 00:10:08.040 3.769 - 3.793: 64.9503% ( 544) 00:10:08.040 3.793 - 3.816: 68.8250% ( 526) 00:10:08.040 3.816 - 3.840: 72.0516% ( 438) 00:10:08.040 3.840 - 3.864: 75.5875% ( 480) 00:10:08.040 3.864 - 3.887: 79.1381% ( 482) 00:10:08.040 3.887 - 3.911: 82.4751% ( 453) 00:10:08.040 3.911 - 3.935: 85.4733% ( 407) 00:10:08.040 3.935 - 3.959: 87.7495% ( 309) 00:10:08.040 3.959 - 3.982: 89.3407% ( 216) 00:10:08.040 3.982 - 4.006: 91.0424% ( 231) 00:10:08.040 4.006 - 4.030: 92.2136% ( 159) 00:10:08.040 4.030 - 4.053: 93.3481% ( 154) 00:10:08.040 4.053 - 4.077: 94.3794% ( 140) 00:10:08.040 4.077 - 4.101: 95.1308% ( 102) 00:10:08.040 4.101 - 4.124: 95.5580% ( 58) 00:10:08.040 4.124 - 4.148: 95.9190% ( 49) 00:10:08.040 4.148 - 4.172: 96.2136% ( 40) 00:10:08.040 4.172 - 4.196: 96.4641% ( 34) 00:10:08.040 4.196 - 4.219: 96.6777% ( 29) 00:10:08.040 4.219 - 4.243: 96.7808% ( 14) 00:10:08.040 4.243 - 4.267: 96.8619% ( 11) 00:10:08.040 4.267 - 4.290: 96.9650% ( 14) 00:10:08.040 4.290 - 4.314: 97.0976% ( 18) 00:10:08.040 4.314 - 4.338: 97.1713% ( 10) 00:10:08.040 4.338 - 4.361: 97.2597% ( 12) 00:10:08.040 4.361 - 4.385: 97.3039% ( 6) 00:10:08.040 4.385 - 4.409: 97.3554% ( 7) 00:10:08.040 4.409 - 4.433: 97.4365% ( 11) 00:10:08.040 4.433 - 4.456: 97.4733% ( 5) 00:10:08.040 4.456 - 4.480: 97.4954% ( 3) 00:10:08.040 4.480 - 4.504: 97.5175% ( 3) 00:10:08.040 4.504 - 4.527: 97.5470% ( 4) 00:10:08.040 4.551 - 4.575: 97.5617% ( 2) 00:10:08.040 4.646 - 4.670: 97.5691% ( 1) 00:10:08.040 4.693 - 4.717: 97.5764% ( 1) 00:10:08.040 4.717 - 4.741: 97.6206% ( 6) 00:10:08.040 4.741 - 4.764: 97.6501% ( 4) 00:10:08.040 4.764 - 4.788: 97.6869% ( 5) 00:10:08.040 4.788 - 4.812: 97.7164% ( 4) 00:10:08.040 4.812 - 4.836: 97.7680% ( 7) 00:10:08.040 4.836 - 4.859: 97.8637% ( 13) 00:10:08.040 4.859 - 4.883: 97.9227% ( 8) 00:10:08.040 4.883 - 4.907: 97.9374% ( 2) 00:10:08.040 4.907 - 4.930: 97.9816% ( 6) 00:10:08.040 4.930 - 4.954: 98.0258% ( 6) 00:10:08.040 4.954 - 4.978: 98.0700% ( 6) 00:10:08.040 4.978 - 5.001: 98.1068% ( 5) 00:10:08.040 5.001 - 5.025: 98.1731% ( 9) 00:10:08.040 5.025 - 5.049: 98.2099% ( 5) 00:10:08.040 5.049 - 5.073: 98.2320% ( 3) 00:10:08.040 5.073 - 5.096: 98.2468% ( 2) 00:10:08.040 5.096 - 5.120: 98.2689% ( 3) 00:10:08.040 5.120 - 5.144: 98.2983% ( 4) 00:10:08.040 5.144 - 5.167: 98.3057% ( 1) 00:10:08.040 5.167 - 5.191: 98.3278% ( 3) 00:10:08.040 5.191 - 5.215: 98.3573% ( 4) 00:10:08.040 5.215 - 5.239: 98.3646% ( 1) 00:10:08.040 5.239 - 5.262: 98.3720% ( 1) 00:10:08.040 5.262 - 5.286: 98.3794% ( 1) 00:10:08.040 5.286 - 5.310: 98.3941% ( 2) 00:10:08.040 5.333 - 5.357: 98.4015% ( 1) 00:10:08.040 5.428 - 5.452: 98.4162% ( 2) 00:10:08.040 5.570 - 5.594: 98.4309% ( 2) 00:10:08.040 5.618 - 5.641: 98.4383% ( 1) 00:10:08.040 5.641 - 5.665: 98.4457% ( 1) 
00:10:08.040 5.665 - 5.689: 98.4530% ( 1) 00:10:08.040 5.689 - 5.713: 98.4604% ( 1) 00:10:08.040 5.926 - 5.950: 98.4751% ( 2) 00:10:08.040 5.950 - 5.973: 98.4825% ( 1) 00:10:08.040 6.305 - 6.353: 98.4899% ( 1) 00:10:08.040 6.353 - 6.400: 98.5046% ( 2) 00:10:08.040 6.495 - 6.542: 98.5120% ( 1) 00:10:08.040 6.542 - 6.590: 98.5193% ( 1) 00:10:08.040 6.874 - 6.921: 98.5267% ( 1) 00:10:08.040 7.016 - 7.064: 98.5341% ( 1) 00:10:08.040 7.064 - 7.111: 98.5414% ( 1) 00:10:08.040 7.206 - 7.253: 98.5488% ( 1) 00:10:08.040 7.253 - 7.301: 98.5635% ( 2) 00:10:08.040 7.490 - 7.538: 98.5709% ( 1) 00:10:08.040 7.538 - 7.585: 98.5783% ( 1) 00:10:08.040 7.585 - 7.633: 98.5856% ( 1) 00:10:08.040 7.633 - 7.680: 98.5930% ( 1) 00:10:08.040 7.680 - 7.727: 98.6004% ( 1) 00:10:08.040 7.822 - 7.870: 98.6077% ( 1) 00:10:08.040 8.154 - 8.201: 98.6151% ( 1) 00:10:08.040 8.201 - 8.249: 98.6298% ( 2) 00:10:08.040 8.391 - 8.439: 98.6372% ( 1) 00:10:08.040 8.439 - 8.486: 98.6519% ( 2) 00:10:08.040 8.676 - 8.723: 98.6593% ( 1) 00:10:08.040 8.865 - 8.913: 98.6667% ( 1) 00:10:08.040 9.055 - 9.102: 98.6740% ( 1) 00:10:08.040 9.102 - 9.150: 98.6814% ( 1) 00:10:08.040 9.150 - 9.197: 98.6961% ( 2) 00:10:08.040 9.292 - 9.339: 98.7035% ( 1) 00:10:08.040 9.434 - 9.481: 98.7109% ( 1) 00:10:08.040 9.576 - 9.624: 98.7182% ( 1) 00:10:08.040 9.671 - 9.719: 98.7330% ( 2) 00:10:08.040 9.719 - 9.766: 98.7403% ( 1) 00:10:08.040 9.766 - 9.813: 98.7551% ( 2) 00:10:08.040 9.813 - 9.861: 98.7845% ( 4) 00:10:08.040 9.908 - 9.956: 98.7919% ( 1) 00:10:08.040 9.956 - 10.003: 98.7993% ( 1) 00:10:08.040 10.287 - 10.335: 98.8066% ( 1) 00:10:08.040 10.382 - 10.430: 98.8140% ( 1) 00:10:08.040 10.430 - 10.477: 98.8287% ( 2) 00:10:08.040 10.477 - 10.524: 98.8361% ( 1) 00:10:08.040 10.667 - 10.714: 98.8435% ( 1) 00:10:08.040 10.714 - 10.761: 98.8508% ( 1) 00:10:08.040 10.809 - 10.856: 98.8582% ( 1) 00:10:08.040 10.904 - 10.951: 98.8656% ( 1) 00:10:08.040 10.999 - 11.046: 98.8729% ( 1) 00:10:08.040 11.188 - 11.236: 98.8803% ( 1) 00:10:08.040 11.330 - 11.378: 98.8877% ( 1) 00:10:08.040 11.473 - 11.520: 98.8950% ( 1) 00:10:08.040 11.662 - 11.710: 98.9024% ( 1) 00:10:08.040 11.710 - 11.757: 98.9098% ( 1) 00:10:08.040 12.136 - 12.231: 98.9171% ( 1) 00:10:08.040 12.231 - 12.326: 98.9245% ( 1) 00:10:08.040 12.326 - 12.421: 98.9319% ( 1) 00:10:08.040 12.705 - 12.800: 98.9392% ( 1) 00:10:08.040 12.800 - 12.895: 98.9466% ( 1) 00:10:08.040 12.895 - 12.990: 98.9540% ( 1) 00:10:08.040 13.369 - 13.464: 98.9613% ( 1) 00:10:08.040 13.559 - 13.653: 98.9687% ( 1) 00:10:08.040 13.843 - 13.938: 98.9834% ( 2) 00:10:08.040 13.938 - 14.033: 98.9908% ( 1) 00:10:08.040 14.033 - 14.127: 98.9982% ( 1) 00:10:08.040 14.412 - 14.507: 99.0129% ( 2) 00:10:08.040 15.170 - 15.265: 99.0203% ( 1) 00:10:08.040 16.213 - 16.308: 99.0276% ( 1) 00:10:08.040 16.877 - 16.972: 99.0350% ( 1) 00:10:08.040 16.972 - 17.067: 99.0424% ( 1) 00:10:08.040 17.161 - 17.256: 99.0497% ( 1) 00:10:08.040 17.256 - 17.351: 99.0571% ( 1) 00:10:08.040 17.351 - 17.446: 99.0792% ( 3) 00:10:08.040 17.446 - 17.541: 99.1013% ( 3) 00:10:08.040 17.541 - 17.636: 99.1234% ( 3) 00:10:08.040 17.636 - 17.730: 99.1823% ( 8) 00:10:08.040 17.730 - 17.825: 99.2339% ( 7) 00:10:08.040 17.825 - 17.920: 99.2855% ( 7) 00:10:08.040 17.920 - 18.015: 99.3444% ( 8) 00:10:08.040 18.015 - 18.110: 99.3738% ( 4) 00:10:08.040 18.110 - 18.204: 99.4107% ( 5) 00:10:08.040 18.204 - 18.299: 99.4843% ( 10) 00:10:08.040 18.299 - 18.394: 99.5433% ( 8) 00:10:08.040 18.394 - 18.489: 99.6390% ( 13) 00:10:08.040 18.489 - 18.584: 99.7053% ( 9) 00:10:08.040 
18.584 - 18.679: 99.7790% ( 10) 00:10:08.040 18.679 - 18.773: 99.8158% ( 5) 00:10:08.040 18.773 - 18.868: 99.8453% ( 4) 00:10:08.040 18.868 - 18.963: 99.8748% ( 4) 00:10:08.040 18.963 - 19.058: 99.8821% ( 1) 00:10:08.040 19.058 - 19.153: 99.8895% ( 1) 00:10:08.040 19.153 - 19.247: 99.8969% ( 1) 00:10:08.040 19.437 - 19.532: 99.9042% ( 1) 00:10:08.040 23.135 - 23.230: 99.9116% ( 1) 00:10:08.040 23.799 - 23.893: 99.9263% ( 2) 00:10:08.040 3980.705 - 4004.978: 99.9779% ( 7) 00:10:08.040 4004.978 - 4029.250: 100.0000% ( 3) 00:10:08.040 00:10:08.040 Complete histogram 00:10:08.040 ================== 00:10:08.040 Range in us Cumulative Count 00:10:08.040 2.050 - 2.062: 0.0884% ( 12) 00:10:08.040 2.062 - 2.074: 25.7459% ( 3483) 00:10:08.040 2.074 - 2.086: 40.0147% ( 1937) 00:10:08.040 2.086 - 2.098: 43.3959% ( 459) 00:10:08.040 2.098 - 2.110: 58.0921% ( 1995) 00:10:08.040 2.110 - 2.121: 61.8858% ( 515) 00:10:08.040 2.121 - 2.133: 64.8692% ( 405) 00:10:08.040 2.133 - 2.145: 76.8398% ( 1625) 00:10:08.040 2.145 - 2.157: 79.9042% ( 416) 00:10:08.040 2.157 - 2.169: 82.4383% ( 344) 00:10:08.040 2.169 - 2.181: 87.2928% ( 659) 00:10:08.040 2.181 - 2.193: 88.7293% ( 195) 00:10:08.040 2.193 - 2.204: 89.5617% ( 113) 00:10:08.040 2.204 - 2.216: 90.8877% ( 180) 00:10:08.040 2.216 - 2.228: 92.4273% ( 209) 00:10:08.040 2.228 - 2.240: 93.6943% ( 172) 00:10:08.040 2.240 - 2.252: 94.4678% ( 105) 00:10:08.040 2.252 - 2.264: 94.8508% ( 52) 00:10:08.040 2.264 - 2.276: 95.0645% ( 29) 00:10:08.040 2.276 - 2.287: 95.2486% ( 25) 00:10:08.040 2.287 - 2.299: 95.5801% ( 45) 00:10:08.040 2.299 - 2.311: 95.8158% ( 32) 00:10:08.040 2.311 - 2.323: 95.8969% ( 11) 00:10:08.040 2.323 - 2.335: 95.9632% ( 9) 00:10:08.040 2.335 - 2.347: 96.0074% ( 6) 00:10:08.040 2.347 - 2.359: 96.0368% ( 4) 00:10:08.040 2.359 - 2.370: 96.1547% ( 16) 00:10:08.040 2.370 - 2.382: 96.3094% ( 21) 00:10:08.040 2.382 - 2.394: 96.5157% ( 28) 00:10:08.040 2.394 - 2.406: 96.7145% ( 27) 00:10:08.040 2.406 - 2.418: 96.9724% ( 35) 00:10:08.040 2.418 - 2.430: 97.1713% ( 27) 00:10:08.040 2.430 - 2.441: 97.3554% ( 25) 00:10:08.040 2.441 - 2.453: 97.5691% ( 29) 00:10:08.040 2.453 - 2.465: 97.7385% ( 23) 00:10:08.040 2.465 - 2.477: 97.8932% ( 21) 00:10:08.040 2.477 - 2.489: 97.9816% ( 12) 00:10:08.040 2.489 - 2.501: 98.0921% ( 15) 00:10:08.040 2.501 - 2.513: 98.1805% ( 12) 00:10:08.040 2.513 - 2.524: 98.2320% ( 7) 00:10:08.040 2.524 - 2.536: 98.2615% ( 4) 00:10:08.040 2.536 - 2.548: 98.2836% ( 3) 00:10:08.040 2.548 - 2.560: 98.3131% ( 4) 00:10:08.040 2.560 - 2.572: 98.3425% ( 4) 00:10:08.040 2.572 - 2.584: 98.3573% ( 2) 00:10:08.040 2.596 - 2.607: 98.3867% ( 4) 00:10:08.040 2.607 - 2.619: 98.3941% ( 1) 00:10:08.040 2.619 - 2.631: 98.4088% ( 2) 00:10:08.040 2.631 - 2.643: 98.4236% ( 2) 00:10:08.040 2.643 - 2.655: 98.4383% ( 2) 00:10:08.040 2.655 - 2.667: 98.4457% ( 1) 00:10:08.040 2.679 - 2.690: 98.4530% ( 1) 00:10:08.040 2.702 - 2.714: 98.4678% ( 2) 00:10:08.040 2.714 - 2.726: 98.4825% ( 2) 00:10:08.040 2.726 - 2.738: 98.4899% ( 1) 00:10:08.040 2.738 - 2.750: 98.4972% ( 1) 00:10:08.040 2.761 - 2.773: 98.5046% ( 1) 00:10:08.040 2.785 - 2.797: 98.5120% ( 1) 00:10:08.040 2.821 - 2.833: 98.5193% ( 1) 00:10:08.040 3.176 - 3.200: 98.5267% ( 1) 00:10:08.040 3.366 - 3.390: 98.5341% ( 1) 00:10:08.040 3.390 - 3.413: 98.5414% ( 1) 00:10:08.040 3.413 - 3.437: 98.5488% ( 1) 00:10:08.040 3.461 - 3.484: 98.5709% ( 3) 00:10:08.040 3.484 - 3.508: 98.5783% ( 1) 00:10:08.040 3.508 - 3.532: 98.5930% ( 2) 00:10:08.040 3.627 - 3.650: 98.6004% ( 1) 00:10:08.040 3.650 - 3.674: 98.6077% ( 1) 
00:10:08.040 3.674 - 3.698: 98.6151% ( 1) 00:10:08.040 3.698 - 3.721: 98.6298% ( 2) 00:10:08.040 3.745 - 3.769: 98.6372% ( 1) 00:10:08.040 3.769 - 3.793: 98.6593% ( 3) 00:10:08.040 3.793 - 3.816: 98.6740% ( 2) 00:10:08.040 3.816 - 3.840: 98.6888% ( 2) 00:10:08.040 3.864 - 3.887: 98.7109% ( 3) 00:10:08.040 3.959 - 3.982: 98.7182% ( 1) 00:10:08.040 3.982 - 4.006: 98.7477% ( 4) 00:10:08.040 4.030 - 4.053: 98.7624% ( 2) 00:10:08.040 4.053 - 4.077: 98.7698% ( 1) 00:10:08.040 4.101 - 4.124: 98.7845% ( 2) 00:10:08.040 5.381 - 5.404: 9[2024-07-15 19:04:48.046443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:08.040 8.7919% ( 1) 00:10:08.040 5.452 - 5.476: 98.7993% ( 1) 00:10:08.040 5.476 - 5.499: 98.8066% ( 1) 00:10:08.040 6.542 - 6.590: 98.8140% ( 1) 00:10:08.040 6.732 - 6.779: 98.8214% ( 1) 00:10:08.040 6.921 - 6.969: 98.8287% ( 1) 00:10:08.040 7.016 - 7.064: 98.8361% ( 1) 00:10:08.040 7.064 - 7.111: 98.8435% ( 1) 00:10:08.040 7.633 - 7.680: 98.8508% ( 1) 00:10:08.040 7.822 - 7.870: 98.8656% ( 2) 00:10:08.040 8.012 - 8.059: 98.8803% ( 2) 00:10:08.040 8.201 - 8.249: 98.8877% ( 1) 00:10:08.040 8.628 - 8.676: 98.8950% ( 1) 00:10:08.040 8.723 - 8.770: 98.9024% ( 1) 00:10:08.040 8.770 - 8.818: 98.9098% ( 1) 00:10:08.040 9.197 - 9.244: 98.9171% ( 1) 00:10:08.040 15.455 - 15.550: 98.9245% ( 1) 00:10:08.041 15.644 - 15.739: 98.9687% ( 6) 00:10:08.041 15.739 - 15.834: 98.9908% ( 3) 00:10:08.041 15.834 - 15.929: 99.0129% ( 3) 00:10:08.041 15.929 - 16.024: 99.0424% ( 4) 00:10:08.041 16.024 - 16.119: 99.0571% ( 2) 00:10:08.041 16.119 - 16.213: 99.1013% ( 6) 00:10:08.041 16.213 - 16.308: 99.1455% ( 6) 00:10:08.041 16.308 - 16.403: 99.1750% ( 4) 00:10:08.041 16.403 - 16.498: 99.2044% ( 4) 00:10:08.041 16.498 - 16.593: 99.2413% ( 5) 00:10:08.041 16.593 - 16.687: 99.2781% ( 5) 00:10:08.041 16.687 - 16.782: 99.2928% ( 2) 00:10:08.041 16.782 - 16.877: 99.3370% ( 6) 00:10:08.041 16.877 - 16.972: 99.3591% ( 3) 00:10:08.041 16.972 - 17.067: 99.3738% ( 2) 00:10:08.041 17.067 - 17.161: 99.3812% ( 1) 00:10:08.041 17.256 - 17.351: 99.3886% ( 1) 00:10:08.041 17.351 - 17.446: 99.4033% ( 2) 00:10:08.041 17.636 - 17.730: 99.4107% ( 1) 00:10:08.041 18.584 - 18.679: 99.4180% ( 1) 00:10:08.041 65.233 - 65.612: 99.4254% ( 1) 00:10:08.041 2014.625 - 2026.761: 99.4328% ( 1) 00:10:08.041 3980.705 - 4004.978: 99.9263% ( 67) 00:10:08.041 4004.978 - 4029.250: 99.9853% ( 8) 00:10:08.041 4029.250 - 4053.523: 99.9926% ( 1) 00:10:08.041 6990.507 - 7039.052: 100.0000% ( 1) 00:10:08.041 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:08.041 [ 00:10:08.041 { 00:10:08.041 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:08.041 "subtype": "Discovery", 00:10:08.041 "listen_addresses": [], 00:10:08.041 "allow_any_host": true, 00:10:08.041 "hosts": [] 00:10:08.041 }, 00:10:08.041 { 00:10:08.041 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:10:08.041 "subtype": "NVMe", 00:10:08.041 "listen_addresses": [ 00:10:08.041 { 00:10:08.041 "trtype": "VFIOUSER", 00:10:08.041 "adrfam": "IPv4", 00:10:08.041 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:08.041 "trsvcid": "0" 00:10:08.041 } 00:10:08.041 ], 00:10:08.041 "allow_any_host": true, 00:10:08.041 "hosts": [], 00:10:08.041 "serial_number": "SPDK1", 00:10:08.041 "model_number": "SPDK bdev Controller", 00:10:08.041 "max_namespaces": 32, 00:10:08.041 "min_cntlid": 1, 00:10:08.041 "max_cntlid": 65519, 00:10:08.041 "namespaces": [ 00:10:08.041 { 00:10:08.041 "nsid": 1, 00:10:08.041 "bdev_name": "Malloc1", 00:10:08.041 "name": "Malloc1", 00:10:08.041 "nguid": "88006F59288C4DBD8F3BBC3FA75F4B09", 00:10:08.041 "uuid": "88006f59-288c-4dbd-8f3b-bc3fa75f4b09" 00:10:08.041 } 00:10:08.041 ] 00:10:08.041 }, 00:10:08.041 { 00:10:08.041 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:08.041 "subtype": "NVMe", 00:10:08.041 "listen_addresses": [ 00:10:08.041 { 00:10:08.041 "trtype": "VFIOUSER", 00:10:08.041 "adrfam": "IPv4", 00:10:08.041 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:08.041 "trsvcid": "0" 00:10:08.041 } 00:10:08.041 ], 00:10:08.041 "allow_any_host": true, 00:10:08.041 "hosts": [], 00:10:08.041 "serial_number": "SPDK2", 00:10:08.041 "model_number": "SPDK bdev Controller", 00:10:08.041 "max_namespaces": 32, 00:10:08.041 "min_cntlid": 1, 00:10:08.041 "max_cntlid": 65519, 00:10:08.041 "namespaces": [ 00:10:08.041 { 00:10:08.041 "nsid": 1, 00:10:08.041 "bdev_name": "Malloc2", 00:10:08.041 "name": "Malloc2", 00:10:08.041 "nguid": "DAF8B823D1BF4B81B5759C27A7E581FF", 00:10:08.041 "uuid": "daf8b823-d1bf-4b81-b575-9c27a7e581ff" 00:10:08.041 } 00:10:08.041 ] 00:10:08.041 } 00:10:08.041 ] 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3251182 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:08.041 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:08.041 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.299 [2024-07-15 19:04:48.493322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:08.299 Malloc3 00:10:08.299 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:08.556 [2024-07-15 19:04:48.865983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:08.556 19:04:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:08.556 Asynchronous Event Request test 00:10:08.556 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:08.556 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:08.556 Registering asynchronous event callbacks... 00:10:08.556 Starting namespace attribute notice tests for all controllers... 00:10:08.556 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:08.556 aer_cb - Changed Namespace 00:10:08.556 Cleaning up... 00:10:08.815 [ 00:10:08.815 { 00:10:08.815 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:08.815 "subtype": "Discovery", 00:10:08.815 "listen_addresses": [], 00:10:08.815 "allow_any_host": true, 00:10:08.815 "hosts": [] 00:10:08.815 }, 00:10:08.815 { 00:10:08.815 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:08.815 "subtype": "NVMe", 00:10:08.815 "listen_addresses": [ 00:10:08.815 { 00:10:08.815 "trtype": "VFIOUSER", 00:10:08.815 "adrfam": "IPv4", 00:10:08.815 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:08.815 "trsvcid": "0" 00:10:08.815 } 00:10:08.815 ], 00:10:08.815 "allow_any_host": true, 00:10:08.815 "hosts": [], 00:10:08.815 "serial_number": "SPDK1", 00:10:08.815 "model_number": "SPDK bdev Controller", 00:10:08.815 "max_namespaces": 32, 00:10:08.815 "min_cntlid": 1, 00:10:08.815 "max_cntlid": 65519, 00:10:08.815 "namespaces": [ 00:10:08.815 { 00:10:08.815 "nsid": 1, 00:10:08.815 "bdev_name": "Malloc1", 00:10:08.815 "name": "Malloc1", 00:10:08.815 "nguid": "88006F59288C4DBD8F3BBC3FA75F4B09", 00:10:08.815 "uuid": "88006f59-288c-4dbd-8f3b-bc3fa75f4b09" 00:10:08.815 }, 00:10:08.815 { 00:10:08.815 "nsid": 2, 00:10:08.815 "bdev_name": "Malloc3", 00:10:08.815 "name": "Malloc3", 00:10:08.815 "nguid": "C19A3450C59A41B3949E8841B73EAC9A", 00:10:08.815 "uuid": "c19a3450-c59a-41b3-949e-8841b73eac9a" 00:10:08.815 } 00:10:08.815 ] 00:10:08.815 }, 00:10:08.815 { 00:10:08.815 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:08.815 "subtype": "NVMe", 00:10:08.815 "listen_addresses": [ 00:10:08.815 { 00:10:08.815 "trtype": "VFIOUSER", 00:10:08.815 "adrfam": "IPv4", 00:10:08.815 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:08.815 "trsvcid": "0" 00:10:08.815 } 00:10:08.815 ], 00:10:08.815 "allow_any_host": true, 00:10:08.815 "hosts": [], 00:10:08.815 "serial_number": "SPDK2", 00:10:08.815 "model_number": "SPDK bdev Controller", 00:10:08.815 
"max_namespaces": 32, 00:10:08.815 "min_cntlid": 1, 00:10:08.815 "max_cntlid": 65519, 00:10:08.815 "namespaces": [ 00:10:08.815 { 00:10:08.815 "nsid": 1, 00:10:08.815 "bdev_name": "Malloc2", 00:10:08.815 "name": "Malloc2", 00:10:08.815 "nguid": "DAF8B823D1BF4B81B5759C27A7E581FF", 00:10:08.815 "uuid": "daf8b823-d1bf-4b81-b575-9c27a7e581ff" 00:10:08.815 } 00:10:08.815 ] 00:10:08.815 } 00:10:08.815 ] 00:10:08.815 19:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3251182 00:10:08.815 19:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:08.815 19:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:08.815 19:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:08.815 19:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:08.816 [2024-07-15 19:04:49.142619] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:10:08.816 [2024-07-15 19:04:49.142659] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251315 ] 00:10:08.816 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.816 [2024-07-15 19:04:49.175977] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:08.816 [2024-07-15 19:04:49.184184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:08.816 [2024-07-15 19:04:49.184215] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbc424dc000 00:10:08.816 [2024-07-15 19:04:49.185182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.186192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.187183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.188201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.189192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.190197] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.191204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.192212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:08.816 [2024-07-15 19:04:49.193220] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:08.816 [2024-07-15 19:04:49.193243] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbc424d1000 00:10:08.816 [2024-07-15 19:04:49.194357] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:08.816 [2024-07-15 19:04:49.210522] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:08.816 [2024-07-15 19:04:49.210557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:08.816 [2024-07-15 19:04:49.212655] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:08.816 [2024-07-15 19:04:49.212705] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:08.816 [2024-07-15 19:04:49.212786] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:08.816 [2024-07-15 19:04:49.212810] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:08.816 [2024-07-15 19:04:49.212820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:08.816 [2024-07-15 19:04:49.213902] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:08.816 [2024-07-15 19:04:49.213923] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:08.816 [2024-07-15 19:04:49.213937] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:08.816 [2024-07-15 19:04:49.217887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:08.816 [2024-07-15 19:04:49.217908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:08.816 [2024-07-15 19:04:49.217923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:08.816 [2024-07-15 19:04:49.218700] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:08.816 [2024-07-15 19:04:49.218720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:08.816 [2024-07-15 19:04:49.219705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:08.816 [2024-07-15 19:04:49.219725] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:08.816 [2024-07-15 19:04:49.219735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:08.816 [2024-07-15 19:04:49.219746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:08.816 [2024-07-15 19:04:49.219856] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:08.816 [2024-07-15 19:04:49.219886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:08.816 [2024-07-15 19:04:49.219895] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:08.816 [2024-07-15 19:04:49.220713] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:08.816 [2024-07-15 19:04:49.221720] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:08.816 [2024-07-15 19:04:49.222734] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:08.816 [2024-07-15 19:04:49.223728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:08.816 [2024-07-15 19:04:49.223810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:08.816 [2024-07-15 19:04:49.224745] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:08.816 [2024-07-15 19:04:49.224766] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:08.816 [2024-07-15 19:04:49.224776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.224801] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:08.816 [2024-07-15 19:04:49.224814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.224835] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:08.816 [2024-07-15 19:04:49.224846] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:08.816 [2024-07-15 19:04:49.224865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:08.816 [2024-07-15 19:04:49.228890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:08.816 [2024-07-15 19:04:49.228913] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:08.816 [2024-07-15 19:04:49.228927] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:08.816 [2024-07-15 19:04:49.228936] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:08.816 [2024-07-15 19:04:49.228944] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:08.816 [2024-07-15 19:04:49.228956] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:08.816 [2024-07-15 19:04:49.228965] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:08.816 [2024-07-15 19:04:49.228973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.228987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.229003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:08.816 [2024-07-15 19:04:49.236887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:08.816 [2024-07-15 19:04:49.236917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.816 [2024-07-15 19:04:49.236932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.816 [2024-07-15 19:04:49.236945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.816 [2024-07-15 19:04:49.236956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.816 [2024-07-15 19:04:49.236966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.236982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.236998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:08.816 [2024-07-15 19:04:49.244890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:08.816 [2024-07-15 19:04:49.244908] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:08.816 [2024-07-15 19:04:49.244932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.244945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.244955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:08.816 [2024-07-15 19:04:49.244969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:09.074 [2024-07-15 19:04:49.252904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:09.074 [2024-07-15 19:04:49.252976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:09.074 [2024-07-15 19:04:49.252992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:09.074 [2024-07-15 19:04:49.253006] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:09.074 [2024-07-15 19:04:49.253015] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:09.074 [2024-07-15 19:04:49.253025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:09.074 [2024-07-15 19:04:49.260888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:09.074 [2024-07-15 19:04:49.260911] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:09.074 [2024-07-15 19:04:49.260928] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:09.074 [2024-07-15 19:04:49.260942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:09.074 [2024-07-15 19:04:49.260955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:09.074 [2024-07-15 19:04:49.260964] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:09.074 [2024-07-15 19:04:49.260974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:09.074 [2024-07-15 19:04:49.268891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:09.074 [2024-07-15 19:04:49.268919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:09.074 [2024-07-15 19:04:49.268935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:09.074 [2024-07-15 19:04:49.268949] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:09.074 [2024-07-15 19:04:49.268958] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:09.075 [2024-07-15 19:04:49.268967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.276889] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.276911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:09.075 [2024-07-15 19:04:49.276924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:09.075 [2024-07-15 19:04:49.276938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:09.075 [2024-07-15 19:04:49.276949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:09.075 [2024-07-15 19:04:49.276957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:09.075 [2024-07-15 19:04:49.276965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:09.075 [2024-07-15 19:04:49.276973] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:09.075 [2024-07-15 19:04:49.276981] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:09.075 [2024-07-15 19:04:49.276989] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:09.075 [2024-07-15 19:04:49.277013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.284886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.284920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.292888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.292914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.300889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.300914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.308904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.308935] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:09.075 [2024-07-15 19:04:49.308946] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:09.075 [2024-07-15 19:04:49.308953] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:10:09.075 [2024-07-15 19:04:49.308958] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:09.075 [2024-07-15 19:04:49.308968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:09.075 [2024-07-15 19:04:49.308980] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:09.075 [2024-07-15 19:04:49.308989] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:09.075 [2024-07-15 19:04:49.308998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.309009] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:09.075 [2024-07-15 19:04:49.309017] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:09.075 [2024-07-15 19:04:49.309026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.309038] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:09.075 [2024-07-15 19:04:49.309046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:09.075 [2024-07-15 19:04:49.309055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.316891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.316919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.316938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.316951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:09.075 ===================================================== 00:10:09.075 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:09.075 ===================================================== 00:10:09.075 Controller Capabilities/Features 00:10:09.075 ================================ 00:10:09.075 Vendor ID: 4e58 00:10:09.075 Subsystem Vendor ID: 4e58 00:10:09.075 Serial Number: SPDK2 00:10:09.075 Model Number: SPDK bdev Controller 00:10:09.075 Firmware Version: 24.09 00:10:09.075 Recommended Arb Burst: 6 00:10:09.075 IEEE OUI Identifier: 8d 6b 50 00:10:09.075 Multi-path I/O 00:10:09.075 May have multiple subsystem ports: Yes 00:10:09.075 May have multiple controllers: Yes 00:10:09.075 Associated with SR-IOV VF: No 00:10:09.075 Max Data Transfer Size: 131072 00:10:09.075 Max Number of Namespaces: 32 00:10:09.075 Max Number of I/O Queues: 127 00:10:09.075 NVMe Specification Version (VS): 1.3 00:10:09.075 NVMe Specification Version (Identify): 1.3 00:10:09.075 Maximum Queue Entries: 256 00:10:09.075 Contiguous Queues Required: Yes 00:10:09.075 Arbitration Mechanisms 
Supported 00:10:09.075 Weighted Round Robin: Not Supported 00:10:09.075 Vendor Specific: Not Supported 00:10:09.075 Reset Timeout: 15000 ms 00:10:09.075 Doorbell Stride: 4 bytes 00:10:09.075 NVM Subsystem Reset: Not Supported 00:10:09.075 Command Sets Supported 00:10:09.075 NVM Command Set: Supported 00:10:09.075 Boot Partition: Not Supported 00:10:09.075 Memory Page Size Minimum: 4096 bytes 00:10:09.075 Memory Page Size Maximum: 4096 bytes 00:10:09.075 Persistent Memory Region: Not Supported 00:10:09.075 Optional Asynchronous Events Supported 00:10:09.075 Namespace Attribute Notices: Supported 00:10:09.075 Firmware Activation Notices: Not Supported 00:10:09.075 ANA Change Notices: Not Supported 00:10:09.075 PLE Aggregate Log Change Notices: Not Supported 00:10:09.075 LBA Status Info Alert Notices: Not Supported 00:10:09.075 EGE Aggregate Log Change Notices: Not Supported 00:10:09.075 Normal NVM Subsystem Shutdown event: Not Supported 00:10:09.075 Zone Descriptor Change Notices: Not Supported 00:10:09.075 Discovery Log Change Notices: Not Supported 00:10:09.075 Controller Attributes 00:10:09.075 128-bit Host Identifier: Supported 00:10:09.075 Non-Operational Permissive Mode: Not Supported 00:10:09.075 NVM Sets: Not Supported 00:10:09.075 Read Recovery Levels: Not Supported 00:10:09.075 Endurance Groups: Not Supported 00:10:09.075 Predictable Latency Mode: Not Supported 00:10:09.075 Traffic Based Keep ALive: Not Supported 00:10:09.075 Namespace Granularity: Not Supported 00:10:09.075 SQ Associations: Not Supported 00:10:09.075 UUID List: Not Supported 00:10:09.075 Multi-Domain Subsystem: Not Supported 00:10:09.075 Fixed Capacity Management: Not Supported 00:10:09.075 Variable Capacity Management: Not Supported 00:10:09.075 Delete Endurance Group: Not Supported 00:10:09.075 Delete NVM Set: Not Supported 00:10:09.075 Extended LBA Formats Supported: Not Supported 00:10:09.075 Flexible Data Placement Supported: Not Supported 00:10:09.075 00:10:09.075 Controller Memory Buffer Support 00:10:09.075 ================================ 00:10:09.075 Supported: No 00:10:09.075 00:10:09.075 Persistent Memory Region Support 00:10:09.075 ================================ 00:10:09.075 Supported: No 00:10:09.075 00:10:09.075 Admin Command Set Attributes 00:10:09.075 ============================ 00:10:09.075 Security Send/Receive: Not Supported 00:10:09.075 Format NVM: Not Supported 00:10:09.075 Firmware Activate/Download: Not Supported 00:10:09.075 Namespace Management: Not Supported 00:10:09.075 Device Self-Test: Not Supported 00:10:09.075 Directives: Not Supported 00:10:09.075 NVMe-MI: Not Supported 00:10:09.075 Virtualization Management: Not Supported 00:10:09.075 Doorbell Buffer Config: Not Supported 00:10:09.075 Get LBA Status Capability: Not Supported 00:10:09.075 Command & Feature Lockdown Capability: Not Supported 00:10:09.075 Abort Command Limit: 4 00:10:09.075 Async Event Request Limit: 4 00:10:09.075 Number of Firmware Slots: N/A 00:10:09.075 Firmware Slot 1 Read-Only: N/A 00:10:09.075 Firmware Activation Without Reset: N/A 00:10:09.075 Multiple Update Detection Support: N/A 00:10:09.075 Firmware Update Granularity: No Information Provided 00:10:09.075 Per-Namespace SMART Log: No 00:10:09.075 Asymmetric Namespace Access Log Page: Not Supported 00:10:09.075 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:09.075 Command Effects Log Page: Supported 00:10:09.075 Get Log Page Extended Data: Supported 00:10:09.075 Telemetry Log Pages: Not Supported 00:10:09.075 Persistent Event Log Pages: Not Supported 
00:10:09.075 Supported Log Pages Log Page: May Support 00:10:09.075 Commands Supported & Effects Log Page: Not Supported 00:10:09.075 Feature Identifiers & Effects Log Page:May Support 00:10:09.075 NVMe-MI Commands & Effects Log Page: May Support 00:10:09.075 Data Area 4 for Telemetry Log: Not Supported 00:10:09.075 Error Log Page Entries Supported: 128 00:10:09.075 Keep Alive: Supported 00:10:09.075 Keep Alive Granularity: 10000 ms 00:10:09.075 00:10:09.075 NVM Command Set Attributes 00:10:09.075 ========================== 00:10:09.075 Submission Queue Entry Size 00:10:09.075 Max: 64 00:10:09.075 Min: 64 00:10:09.075 Completion Queue Entry Size 00:10:09.075 Max: 16 00:10:09.075 Min: 16 00:10:09.075 Number of Namespaces: 32 00:10:09.075 Compare Command: Supported 00:10:09.075 Write Uncorrectable Command: Not Supported 00:10:09.075 Dataset Management Command: Supported 00:10:09.075 Write Zeroes Command: Supported 00:10:09.075 Set Features Save Field: Not Supported 00:10:09.075 Reservations: Not Supported 00:10:09.075 Timestamp: Not Supported 00:10:09.075 Copy: Supported 00:10:09.075 Volatile Write Cache: Present 00:10:09.075 Atomic Write Unit (Normal): 1 00:10:09.075 Atomic Write Unit (PFail): 1 00:10:09.075 Atomic Compare & Write Unit: 1 00:10:09.075 Fused Compare & Write: Supported 00:10:09.075 Scatter-Gather List 00:10:09.075 SGL Command Set: Supported (Dword aligned) 00:10:09.075 SGL Keyed: Not Supported 00:10:09.075 SGL Bit Bucket Descriptor: Not Supported 00:10:09.075 SGL Metadata Pointer: Not Supported 00:10:09.075 Oversized SGL: Not Supported 00:10:09.075 SGL Metadata Address: Not Supported 00:10:09.075 SGL Offset: Not Supported 00:10:09.075 Transport SGL Data Block: Not Supported 00:10:09.075 Replay Protected Memory Block: Not Supported 00:10:09.075 00:10:09.075 Firmware Slot Information 00:10:09.075 ========================= 00:10:09.075 Active slot: 1 00:10:09.075 Slot 1 Firmware Revision: 24.09 00:10:09.075 00:10:09.075 00:10:09.075 Commands Supported and Effects 00:10:09.075 ============================== 00:10:09.075 Admin Commands 00:10:09.075 -------------- 00:10:09.075 Get Log Page (02h): Supported 00:10:09.075 Identify (06h): Supported 00:10:09.075 Abort (08h): Supported 00:10:09.075 Set Features (09h): Supported 00:10:09.075 Get Features (0Ah): Supported 00:10:09.075 Asynchronous Event Request (0Ch): Supported 00:10:09.075 Keep Alive (18h): Supported 00:10:09.075 I/O Commands 00:10:09.075 ------------ 00:10:09.075 Flush (00h): Supported LBA-Change 00:10:09.075 Write (01h): Supported LBA-Change 00:10:09.075 Read (02h): Supported 00:10:09.075 Compare (05h): Supported 00:10:09.075 Write Zeroes (08h): Supported LBA-Change 00:10:09.075 Dataset Management (09h): Supported LBA-Change 00:10:09.075 Copy (19h): Supported LBA-Change 00:10:09.075 00:10:09.075 Error Log 00:10:09.075 ========= 00:10:09.075 00:10:09.075 Arbitration 00:10:09.075 =========== 00:10:09.075 Arbitration Burst: 1 00:10:09.075 00:10:09.075 Power Management 00:10:09.075 ================ 00:10:09.075 Number of Power States: 1 00:10:09.075 Current Power State: Power State #0 00:10:09.075 Power State #0: 00:10:09.075 Max Power: 0.00 W 00:10:09.075 Non-Operational State: Operational 00:10:09.075 Entry Latency: Not Reported 00:10:09.075 Exit Latency: Not Reported 00:10:09.075 Relative Read Throughput: 0 00:10:09.075 Relative Read Latency: 0 00:10:09.075 Relative Write Throughput: 0 00:10:09.075 Relative Write Latency: 0 00:10:09.075 Idle Power: Not Reported 00:10:09.075 Active Power: Not Reported 00:10:09.075 
Non-Operational Permissive Mode: Not Supported 00:10:09.075 00:10:09.075 Health Information 00:10:09.075 ================== 00:10:09.075 Critical Warnings: 00:10:09.075 Available Spare Space: OK 00:10:09.075 Temperature: OK 00:10:09.075 Device Reliability: OK 00:10:09.075 Read Only: No 00:10:09.075 Volatile Memory Backup: OK 00:10:09.075 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:09.075 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:09.075 Available Spare: 0% 00:10:09.075 Available Sp[2024-07-15 19:04:49.317070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:09.075 [2024-07-15 19:04:49.324886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.324938] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:09.075 [2024-07-15 19:04:49.324957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.324974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.324985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.324996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.075 [2024-07-15 19:04:49.325080] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:09.075 [2024-07-15 19:04:49.325102] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:09.075 [2024-07-15 19:04:49.326090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:09.075 [2024-07-15 19:04:49.326175] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:09.075 [2024-07-15 19:04:49.326191] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:09.075 [2024-07-15 19:04:49.327103] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:09.075 [2024-07-15 19:04:49.327128] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:09.075 [2024-07-15 19:04:49.327180] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:09.075 [2024-07-15 19:04:49.329888] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:09.075 are Threshold: 0% 00:10:09.075 Life Percentage Used: 0% 00:10:09.075 Data Units Read: 0 00:10:09.075 Data Units Written: 0 00:10:09.075 Host Read Commands: 0 00:10:09.075 Host Write Commands: 0 00:10:09.075 Controller Busy Time: 0 minutes 00:10:09.075 Power Cycles: 0 00:10:09.075 Power On Hours: 0 hours 00:10:09.075 Unsafe Shutdowns: 0 00:10:09.075 Unrecoverable Media 
Errors: 0 00:10:09.076 Lifetime Error Log Entries: 0 00:10:09.076 Warning Temperature Time: 0 minutes 00:10:09.076 Critical Temperature Time: 0 minutes 00:10:09.076 00:10:09.076 Number of Queues 00:10:09.076 ================ 00:10:09.076 Number of I/O Submission Queues: 127 00:10:09.076 Number of I/O Completion Queues: 127 00:10:09.076 00:10:09.076 Active Namespaces 00:10:09.076 ================= 00:10:09.076 Namespace ID:1 00:10:09.076 Error Recovery Timeout: Unlimited 00:10:09.076 Command Set Identifier: NVM (00h) 00:10:09.076 Deallocate: Supported 00:10:09.076 Deallocated/Unwritten Error: Not Supported 00:10:09.076 Deallocated Read Value: Unknown 00:10:09.076 Deallocate in Write Zeroes: Not Supported 00:10:09.076 Deallocated Guard Field: 0xFFFF 00:10:09.076 Flush: Supported 00:10:09.076 Reservation: Supported 00:10:09.076 Namespace Sharing Capabilities: Multiple Controllers 00:10:09.076 Size (in LBAs): 131072 (0GiB) 00:10:09.076 Capacity (in LBAs): 131072 (0GiB) 00:10:09.076 Utilization (in LBAs): 131072 (0GiB) 00:10:09.076 NGUID: DAF8B823D1BF4B81B5759C27A7E581FF 00:10:09.076 UUID: daf8b823-d1bf-4b81-b575-9c27a7e581ff 00:10:09.076 Thin Provisioning: Not Supported 00:10:09.076 Per-NS Atomic Units: Yes 00:10:09.076 Atomic Boundary Size (Normal): 0 00:10:09.076 Atomic Boundary Size (PFail): 0 00:10:09.076 Atomic Boundary Offset: 0 00:10:09.076 Maximum Single Source Range Length: 65535 00:10:09.076 Maximum Copy Length: 65535 00:10:09.076 Maximum Source Range Count: 1 00:10:09.076 NGUID/EUI64 Never Reused: No 00:10:09.076 Namespace Write Protected: No 00:10:09.076 Number of LBA Formats: 1 00:10:09.076 Current LBA Format: LBA Format #00 00:10:09.076 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:09.076 00:10:09.076 19:04:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:09.076 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.383 [2024-07-15 19:04:49.547600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:14.657 Initializing NVMe Controllers 00:10:14.657 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:14.657 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:14.657 Initialization complete. Launching workers. 
00:10:14.657 ======================================================== 00:10:14.657 Latency(us) 00:10:14.657 Device Information : IOPS MiB/s Average min max 00:10:14.657 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35091.78 137.08 3646.96 1173.39 9887.57 00:10:14.657 ======================================================== 00:10:14.657 Total : 35091.78 137.08 3646.96 1173.39 9887.57 00:10:14.657 00:10:14.657 [2024-07-15 19:04:54.654253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:14.657 19:04:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:14.657 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.657 [2024-07-15 19:04:54.885839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:19.996 Initializing NVMe Controllers 00:10:19.996 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:19.996 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:19.996 Initialization complete. Launching workers. 00:10:19.996 ======================================================== 00:10:19.996 Latency(us) 00:10:19.996 Device Information : IOPS MiB/s Average min max 00:10:19.996 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32597.87 127.34 3926.00 1205.38 9253.49 00:10:19.996 ======================================================== 00:10:19.996 Total : 32597.87 127.34 3926.00 1205.38 9253.49 00:10:19.996 00:10:19.996 [2024-07-15 19:04:59.903802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:19.996 19:04:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:19.996 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.996 [2024-07-15 19:05:00.118787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:25.267 [2024-07-15 19:05:05.259027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:25.267 Initializing NVMe Controllers 00:10:25.267 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:25.267 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:25.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:25.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:25.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:25.267 Initialization complete. Launching workers. 
00:10:25.267 Starting thread on core 2 00:10:25.267 Starting thread on core 3 00:10:25.267 Starting thread on core 1 00:10:25.267 19:05:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:25.267 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.267 [2024-07-15 19:05:05.573348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:28.557 [2024-07-15 19:05:08.967144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:28.818 Initializing NVMe Controllers 00:10:28.818 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:28.818 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:28.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:28.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:28.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:28.818 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:28.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:28.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:28.818 Initialization complete. Launching workers. 00:10:28.818 Starting thread on core 1 with urgent priority queue 00:10:28.818 Starting thread on core 2 with urgent priority queue 00:10:28.818 Starting thread on core 3 with urgent priority queue 00:10:28.818 Starting thread on core 0 with urgent priority queue 00:10:28.818 SPDK bdev Controller (SPDK2 ) core 0: 854.67 IO/s 117.00 secs/100000 ios 00:10:28.818 SPDK bdev Controller (SPDK2 ) core 1: 826.67 IO/s 120.97 secs/100000 ios 00:10:28.818 SPDK bdev Controller (SPDK2 ) core 2: 866.67 IO/s 115.38 secs/100000 ios 00:10:28.818 SPDK bdev Controller (SPDK2 ) core 3: 768.33 IO/s 130.15 secs/100000 ios 00:10:28.818 ======================================================== 00:10:28.818 00:10:28.818 19:05:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:28.818 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.079 [2024-07-15 19:05:09.275385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:29.079 Initializing NVMe Controllers 00:10:29.079 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:29.079 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:29.079 Namespace ID: 1 size: 0GB 00:10:29.079 Initialization complete. 00:10:29.079 INFO: using host memory buffer for IO 00:10:29.079 Hello world! 
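A quick consistency check on the figures reported above (a minimal sketch, not part of the captured log, assuming python3 is available on the build host): IOPS multiplied by the 4096-byte I/O size reproduces the MiB/s column of the two spdk_nvme_perf summaries, and 100000 divided by each arbitration line's IO/s value reproduces its "secs/100000 ios" column.
  # read run:  35091.78 IOPS * 4096 B / 2^20 = 137.08 MiB/s  (matches the read latency table)
  # write run: 32597.87 IOPS * 4096 B / 2^20 = 127.34 MiB/s  (matches the write latency table)
  # core 0:    100000 ios / 854.67 IO/s      = 117.00 s      (matches "secs/100000 ios")
  python3 -c 'print(round(35091.78*4096/2**20,2), round(32597.87*4096/2**20,2), round(100000/854.67,2))'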
00:10:29.079 [2024-07-15 19:05:09.287450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:29.079 19:05:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:29.079 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.337 [2024-07-15 19:05:09.579226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:30.275 Initializing NVMe Controllers 00:10:30.275 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:30.275 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:30.275 Initialization complete. Launching workers. 00:10:30.275 submit (in ns) avg, min, max = 7975.6, 3502.2, 4016304.4 00:10:30.275 complete (in ns) avg, min, max = 24929.7, 2052.2, 5996274.4 00:10:30.275 00:10:30.275 Submit histogram 00:10:30.275 ================ 00:10:30.275 Range in us Cumulative Count 00:10:30.275 3.484 - 3.508: 0.0526% ( 7) 00:10:30.275 3.508 - 3.532: 0.6164% ( 75) 00:10:30.275 3.532 - 3.556: 1.9994% ( 184) 00:10:30.275 3.556 - 3.579: 5.7877% ( 504) 00:10:30.275 3.579 - 3.603: 12.1317% ( 844) 00:10:30.275 3.603 - 3.627: 20.6103% ( 1128) 00:10:30.275 3.627 - 3.650: 30.7952% ( 1355) 00:10:30.275 3.650 - 3.674: 39.5821% ( 1169) 00:10:30.275 3.674 - 3.698: 46.6777% ( 944) 00:10:30.275 3.698 - 3.721: 52.7435% ( 807) 00:10:30.275 3.721 - 3.745: 57.1708% ( 589) 00:10:30.275 3.745 - 3.769: 61.2974% ( 549) 00:10:30.275 3.769 - 3.793: 64.8001% ( 466) 00:10:30.275 3.793 - 3.816: 68.1975% ( 452) 00:10:30.275 3.816 - 3.840: 71.2718% ( 409) 00:10:30.275 3.840 - 3.864: 75.2480% ( 529) 00:10:30.275 3.864 - 3.887: 79.1115% ( 514) 00:10:30.275 3.887 - 3.911: 82.4263% ( 441) 00:10:30.275 3.911 - 3.935: 85.2676% ( 378) 00:10:30.275 3.935 - 3.959: 87.2520% ( 264) 00:10:30.275 3.959 - 3.982: 88.9131% ( 221) 00:10:30.275 3.982 - 4.006: 90.2811% ( 182) 00:10:30.275 4.006 - 4.030: 91.5289% ( 166) 00:10:30.275 4.030 - 4.053: 92.5060% ( 130) 00:10:30.275 4.053 - 4.077: 93.4456% ( 125) 00:10:30.275 4.077 - 4.101: 94.2273% ( 104) 00:10:30.275 4.101 - 4.124: 94.8737% ( 86) 00:10:30.275 4.124 - 4.148: 95.3999% ( 70) 00:10:30.275 4.148 - 4.172: 95.7156% ( 42) 00:10:30.275 4.172 - 4.196: 96.0238% ( 41) 00:10:30.275 4.196 - 4.219: 96.1816% ( 21) 00:10:30.275 4.219 - 4.243: 96.3620% ( 24) 00:10:30.275 4.243 - 4.267: 96.5048% ( 19) 00:10:30.275 4.267 - 4.290: 96.6176% ( 15) 00:10:30.275 4.290 - 4.314: 96.6927% ( 10) 00:10:30.275 4.314 - 4.338: 96.7679% ( 10) 00:10:30.275 4.338 - 4.361: 96.8130% ( 6) 00:10:30.275 4.361 - 4.385: 96.8656% ( 7) 00:10:30.275 4.385 - 4.409: 96.9107% ( 6) 00:10:30.275 4.409 - 4.433: 96.9558% ( 6) 00:10:30.275 4.433 - 4.456: 96.9708% ( 2) 00:10:30.275 4.456 - 4.480: 96.9934% ( 3) 00:10:30.275 4.504 - 4.527: 97.0159% ( 3) 00:10:30.275 4.527 - 4.551: 97.0310% ( 2) 00:10:30.275 4.551 - 4.575: 97.0460% ( 2) 00:10:30.275 4.575 - 4.599: 97.0535% ( 1) 00:10:30.275 4.599 - 4.622: 97.0686% ( 2) 00:10:30.275 4.622 - 4.646: 97.1061% ( 5) 00:10:30.275 4.646 - 4.670: 97.1287% ( 3) 00:10:30.275 4.670 - 4.693: 97.1362% ( 1) 00:10:30.275 4.693 - 4.717: 97.1512% ( 2) 00:10:30.275 4.717 - 4.741: 97.1587% ( 1) 00:10:30.275 4.741 - 4.764: 97.1738% ( 2) 00:10:30.275 4.764 - 4.788: 97.1888% ( 2) 00:10:30.275 4.788 - 4.812: 97.2189% ( 4) 00:10:30.275 4.812 - 4.836: 97.2715% ( 7) 00:10:30.275 4.836 
- 4.859: 97.3241% ( 7) 00:10:30.275 4.859 - 4.883: 97.3918% ( 9) 00:10:30.275 4.883 - 4.907: 97.4519% ( 8) 00:10:30.275 4.907 - 4.930: 97.4820% ( 4) 00:10:30.275 4.930 - 4.954: 97.5421% ( 8) 00:10:30.275 4.954 - 4.978: 97.5722% ( 4) 00:10:30.275 4.978 - 5.001: 97.6097% ( 5) 00:10:30.275 5.001 - 5.025: 97.6398% ( 4) 00:10:30.275 5.025 - 5.049: 97.6999% ( 8) 00:10:30.275 5.049 - 5.073: 97.7526% ( 7) 00:10:30.275 5.073 - 5.096: 97.7826% ( 4) 00:10:30.275 5.096 - 5.120: 97.8127% ( 4) 00:10:30.275 5.120 - 5.144: 97.8503% ( 5) 00:10:30.275 5.144 - 5.167: 97.8728% ( 3) 00:10:30.275 5.167 - 5.191: 97.9029% ( 4) 00:10:30.275 5.191 - 5.215: 97.9405% ( 5) 00:10:30.275 5.215 - 5.239: 97.9705% ( 4) 00:10:30.275 5.239 - 5.262: 97.9781% ( 1) 00:10:30.275 5.262 - 5.286: 98.0006% ( 3) 00:10:30.275 5.310 - 5.333: 98.0232% ( 3) 00:10:30.275 5.333 - 5.357: 98.0382% ( 2) 00:10:30.275 5.404 - 5.428: 98.0532% ( 2) 00:10:30.275 5.428 - 5.452: 98.0607% ( 1) 00:10:30.275 5.476 - 5.499: 98.0683% ( 1) 00:10:30.275 5.499 - 5.523: 98.0833% ( 2) 00:10:30.275 5.523 - 5.547: 98.0908% ( 1) 00:10:30.275 5.641 - 5.665: 98.0983% ( 1) 00:10:30.275 5.665 - 5.689: 98.1058% ( 1) 00:10:30.275 5.713 - 5.736: 98.1209% ( 2) 00:10:30.275 5.784 - 5.807: 98.1284% ( 1) 00:10:30.275 5.807 - 5.831: 98.1359% ( 1) 00:10:30.275 5.855 - 5.879: 98.1434% ( 1) 00:10:30.275 5.902 - 5.926: 98.1509% ( 1) 00:10:30.275 5.997 - 6.021: 98.1660% ( 2) 00:10:30.275 6.068 - 6.116: 98.1810% ( 2) 00:10:30.275 6.116 - 6.163: 98.1885% ( 1) 00:10:30.275 6.163 - 6.210: 98.1960% ( 1) 00:10:30.275 6.210 - 6.258: 98.2035% ( 1) 00:10:30.275 6.258 - 6.305: 98.2186% ( 2) 00:10:30.275 6.305 - 6.353: 98.2336% ( 2) 00:10:30.275 6.353 - 6.400: 98.2411% ( 1) 00:10:30.275 6.495 - 6.542: 98.2486% ( 1) 00:10:30.275 6.590 - 6.637: 98.2562% ( 1) 00:10:30.275 6.637 - 6.684: 98.2712% ( 2) 00:10:30.275 6.684 - 6.732: 98.2862% ( 2) 00:10:30.275 6.732 - 6.779: 98.2937% ( 1) 00:10:30.275 6.779 - 6.827: 98.3013% ( 1) 00:10:30.275 6.827 - 6.874: 98.3088% ( 1) 00:10:30.275 6.969 - 7.016: 98.3163% ( 1) 00:10:30.275 7.064 - 7.111: 98.3238% ( 1) 00:10:30.275 7.159 - 7.206: 98.3313% ( 1) 00:10:30.275 7.301 - 7.348: 98.3388% ( 1) 00:10:30.275 7.348 - 7.396: 98.3464% ( 1) 00:10:30.275 7.396 - 7.443: 98.3539% ( 1) 00:10:30.275 7.775 - 7.822: 98.3689% ( 2) 00:10:30.275 7.917 - 7.964: 98.3839% ( 2) 00:10:30.275 8.059 - 8.107: 98.3915% ( 1) 00:10:30.275 8.107 - 8.154: 98.4065% ( 2) 00:10:30.275 8.154 - 8.201: 98.4140% ( 1) 00:10:30.275 8.201 - 8.249: 98.4215% ( 1) 00:10:30.275 8.249 - 8.296: 98.4290% ( 1) 00:10:30.275 8.296 - 8.344: 98.4366% ( 1) 00:10:30.275 8.344 - 8.391: 98.4516% ( 2) 00:10:30.275 8.391 - 8.439: 98.4666% ( 2) 00:10:30.275 8.439 - 8.486: 98.4741% ( 1) 00:10:30.275 8.486 - 8.533: 98.4817% ( 1) 00:10:30.275 8.533 - 8.581: 98.4892% ( 1) 00:10:30.275 8.581 - 8.628: 98.4967% ( 1) 00:10:30.275 8.628 - 8.676: 98.5117% ( 2) 00:10:30.275 8.676 - 8.723: 98.5192% ( 1) 00:10:30.275 8.770 - 8.818: 98.5343% ( 2) 00:10:30.275 8.818 - 8.865: 98.5418% ( 1) 00:10:30.275 8.865 - 8.913: 98.5493% ( 1) 00:10:30.275 8.960 - 9.007: 98.5568% ( 1) 00:10:30.275 9.007 - 9.055: 98.5643% ( 1) 00:10:30.275 9.055 - 9.102: 98.5719% ( 1) 00:10:30.275 9.102 - 9.150: 98.5794% ( 1) 00:10:30.275 9.150 - 9.197: 98.5869% ( 1) 00:10:30.275 9.197 - 9.244: 98.6170% ( 4) 00:10:30.275 9.244 - 9.292: 98.6245% ( 1) 00:10:30.275 9.292 - 9.339: 98.6320% ( 1) 00:10:30.275 9.339 - 9.387: 98.6395% ( 1) 00:10:30.275 9.387 - 9.434: 98.6470% ( 1) 00:10:30.275 9.481 - 9.529: 98.6545% ( 1) 00:10:30.275 9.529 - 9.576: 98.6621% ( 1) 
00:10:30.275 9.671 - 9.719: 98.6771% ( 2) 00:10:30.275 9.766 - 9.813: 98.6846% ( 1) 00:10:30.275 9.861 - 9.908: 98.6921% ( 1) 00:10:30.275 10.003 - 10.050: 98.7072% ( 2) 00:10:30.275 10.050 - 10.098: 98.7147% ( 1) 00:10:30.275 10.098 - 10.145: 98.7222% ( 1) 00:10:30.275 10.145 - 10.193: 98.7297% ( 1) 00:10:30.275 10.193 - 10.240: 98.7372% ( 1) 00:10:30.275 10.287 - 10.335: 98.7447% ( 1) 00:10:30.275 10.335 - 10.382: 98.7523% ( 1) 00:10:30.275 10.430 - 10.477: 98.7598% ( 1) 00:10:30.275 10.667 - 10.714: 98.7673% ( 1) 00:10:30.276 10.714 - 10.761: 98.7748% ( 1) 00:10:30.276 10.951 - 10.999: 98.7823% ( 1) 00:10:30.276 10.999 - 11.046: 98.8049% ( 3) 00:10:30.276 11.046 - 11.093: 98.8124% ( 1) 00:10:30.276 11.236 - 11.283: 98.8199% ( 1) 00:10:30.276 11.378 - 11.425: 98.8349% ( 2) 00:10:30.276 11.425 - 11.473: 98.8425% ( 1) 00:10:30.276 11.473 - 11.520: 98.8575% ( 2) 00:10:30.276 11.710 - 11.757: 98.8650% ( 1) 00:10:30.276 11.947 - 11.994: 98.8725% ( 1) 00:10:30.276 11.994 - 12.041: 98.8800% ( 1) 00:10:30.276 12.041 - 12.089: 98.8876% ( 1) 00:10:30.276 12.231 - 12.326: 98.8951% ( 1) 00:10:30.276 12.326 - 12.421: 98.9026% ( 1) 00:10:30.276 12.421 - 12.516: 98.9101% ( 1) 00:10:30.276 12.516 - 12.610: 98.9176% ( 1) 00:10:30.276 12.610 - 12.705: 98.9251% ( 1) 00:10:30.276 12.800 - 12.895: 98.9327% ( 1) 00:10:30.276 12.895 - 12.990: 98.9402% ( 1) 00:10:30.276 12.990 - 13.084: 98.9477% ( 1) 00:10:30.276 13.369 - 13.464: 98.9552% ( 1) 00:10:30.276 13.464 - 13.559: 98.9778% ( 3) 00:10:30.276 13.559 - 13.653: 99.0003% ( 3) 00:10:30.276 14.033 - 14.127: 99.0078% ( 1) 00:10:30.276 14.222 - 14.317: 99.0153% ( 1) 00:10:30.276 14.412 - 14.507: 99.0229% ( 1) 00:10:30.276 14.507 - 14.601: 99.0304% ( 1) 00:10:30.276 14.791 - 14.886: 99.0379% ( 1) 00:10:30.276 14.981 - 15.076: 99.0454% ( 1) 00:10:30.276 15.360 - 15.455: 99.0529% ( 1) 00:10:30.276 15.455 - 15.550: 99.0604% ( 1) 00:10:30.276 15.644 - 15.739: 99.0679% ( 1) 00:10:30.276 15.834 - 15.929: 99.0755% ( 1) 00:10:30.276 17.067 - 17.161: 99.0830% ( 1) 00:10:30.276 17.161 - 17.256: 99.0905% ( 1) 00:10:30.276 17.256 - 17.351: 99.1281% ( 5) 00:10:30.276 17.351 - 17.446: 99.1506% ( 3) 00:10:30.276 17.446 - 17.541: 99.1657% ( 2) 00:10:30.276 17.541 - 17.636: 99.1957% ( 4) 00:10:30.276 17.636 - 17.730: 99.2333% ( 5) 00:10:30.276 17.730 - 17.825: 99.2483% ( 2) 00:10:30.276 17.825 - 17.920: 99.2934% ( 6) 00:10:30.276 17.920 - 18.015: 99.3235% ( 4) 00:10:30.276 18.015 - 18.110: 99.3761% ( 7) 00:10:30.276 18.110 - 18.204: 99.4588% ( 11) 00:10:30.276 18.204 - 18.299: 99.5189% ( 8) 00:10:30.276 18.299 - 18.394: 99.5791% ( 8) 00:10:30.276 18.394 - 18.489: 99.6167% ( 5) 00:10:30.276 18.489 - 18.584: 99.6467% ( 4) 00:10:30.276 18.584 - 18.679: 99.6843% ( 5) 00:10:30.276 18.679 - 18.773: 99.7069% ( 3) 00:10:30.276 18.773 - 18.868: 99.7369% ( 4) 00:10:30.276 18.868 - 18.963: 99.7520% ( 2) 00:10:30.276 18.963 - 19.058: 99.7595% ( 1) 00:10:30.276 19.153 - 19.247: 99.7745% ( 2) 00:10:30.276 19.247 - 19.342: 99.7820% ( 1) 00:10:30.276 19.532 - 19.627: 99.7895% ( 1) 00:10:30.276 19.816 - 19.911: 99.7971% ( 1) 00:10:30.276 19.911 - 20.006: 99.8046% ( 1) 00:10:30.276 20.101 - 20.196: 99.8121% ( 1) 00:10:30.276 20.196 - 20.290: 99.8196% ( 1) 00:10:30.276 20.954 - 21.049: 99.8346% ( 2) 00:10:30.276 22.187 - 22.281: 99.8422% ( 1) 00:10:30.276 22.945 - 23.040: 99.8497% ( 1) 00:10:30.276 23.514 - 23.609: 99.8572% ( 1) 00:10:30.276 24.462 - 24.652: 99.8647% ( 1) 00:10:30.276 25.979 - 26.169: 99.8722% ( 1) 00:10:30.276 26.169 - 26.359: 99.8797% ( 1) 00:10:30.276 27.307 - 27.496: 99.8873% 
( 1) 00:10:30.276 36.409 - 36.599: 99.8948% ( 1) 00:10:30.276 1614.127 - 1626.264: 99.9023% ( 1) 00:10:30.276 3980.705 - 4004.978: 99.9775% ( 10) 00:10:30.276 4004.978 - 4029.250: 100.0000% ( 3) 00:10:30.276 00:10:30.276 Complete histogram 00:10:30.276 ================== 00:10:30.276 Range in us Cumulative Count 00:10:30.276 2.050 - 2.062: 6.7273% ( 895) 00:10:30.276 2.062 - 2.074: 32.8322% ( 3473) 00:10:30.276 2.074 - 2.086: 35.7411% ( 387) 00:10:30.276 2.086 - 2.098: 48.4140% ( 1686) 00:10:30.276 2.098 - 2.110: 58.9973% ( 1408) 00:10:30.276 2.110 - 2.121: 60.4781% ( 197) 00:10:30.276 2.121 - 2.133: 67.7766% ( 971) 00:10:30.276 2.133 - 2.145: 73.7072% ( 789) 00:10:30.276 2.145 - 2.157: 74.7971% ( 145) 00:10:30.276 2.157 - 2.169: 80.5096% ( 760) 00:10:30.276 2.169 - 2.181: 82.9149% ( 320) 00:10:30.276 2.181 - 2.193: 83.6666% ( 100) 00:10:30.276 2.193 - 2.204: 85.8689% ( 293) 00:10:30.276 2.204 - 2.216: 88.5373% ( 355) 00:10:30.276 2.216 - 2.228: 90.4690% ( 257) 00:10:30.276 2.228 - 2.240: 92.5060% ( 271) 00:10:30.276 2.240 - 2.252: 93.9492% ( 192) 00:10:30.276 2.252 - 2.264: 94.3626% ( 55) 00:10:30.276 2.264 - 2.276: 94.6482% ( 38) 00:10:30.276 2.276 - 2.287: 95.0090% ( 48) 00:10:30.276 2.287 - 2.299: 95.4375% ( 57) 00:10:30.276 2.299 - 2.311: 95.5728% ( 18) 00:10:30.276 2.311 - 2.323: 95.6630% ( 12) 00:10:30.276 2.323 - 2.335: 95.7306% ( 9) 00:10:30.276 2.335 - 2.347: 95.7907% ( 8) 00:10:30.276 2.347 - 2.359: 95.9336% ( 19) 00:10:30.276 2.359 - 2.370: 96.1666% ( 31) 00:10:30.276 2.370 - 2.382: 96.4898% ( 43) 00:10:30.276 2.382 - 2.394: 96.7529% ( 35) 00:10:30.276 2.394 - 2.406: 97.0460% ( 39) 00:10:30.276 2.406 - 2.418: 97.2189% ( 23) 00:10:30.276 2.418 - 2.430: 97.5045% ( 38) 00:10:30.276 2.430 - 2.441: 97.6473% ( 19) 00:10:30.276 2.441 - 2.453: 97.8052% ( 21) 00:10:30.276 2.453 - 2.465: 97.9330% ( 17) 00:10:30.276 2.465 - 2.477: 97.9931% ( 8) 00:10:30.276 2.477 - 2.489: 98.0908% ( 13) 00:10:30.276 2.489 - 2.501: 98.1284% ( 5) 00:10:30.276 2.501 - 2.513: 98.1660% ( 5) 00:10:30.276 2.513 - 2.524: 98.2186% ( 7) 00:10:30.276 2.524 - 2.536: 98.2562% ( 5) 00:10:30.276 2.536 - 2.548: 98.2787% ( 3) 00:10:30.276 2.548 - 2.560: 98.2862% ( 1) 00:10:30.276 2.560 - 2.572: 98.3013% ( 2) 00:10:30.276 2.572 - 2.584: 98.3238% ( 3) 00:10:30.276 2.584 - 2.596: 98.3464% ( 3) 00:10:30.276 2.596 - 2.607: 98.3689% ( 3) 00:10:30.276 2.607 - 2.619: 98.3764% ( 1) 00:10:30.276 2.619 - 2.631: 98.3915% ( 2) 00:10:30.276 2.631 - 2.643: 98.4065% ( 2) 00:10:30.276 2.643 - 2.655: 98.4140% ( 1) 00:10:30.276 2.667 - 2.679: 98.4215% ( 1) 00:10:30.276 2.679 - 2.690: 98.4290% ( 1) 00:10:30.276 2.690 - 2.702: 98.4366% ( 1) 00:10:30.276 2.702 - 2.714: 98.4516% ( 2) 00:10:30.276 2.714 - 2.726: 98.4591% ( 1) 00:10:30.276 2.726 - 2.738: 98.4666% ( 1) 00:10:30.276 2.738 - 2.750: 98.4741% ( 1) 00:10:30.276 2.750 - 2.761: 98.4817% ( 1) 00:10:30.276 2.809 - 2.821: 98.4892% ( 1) 00:10:30.276 2.856 - 2.868: 98.5042% ( 2) 00:10:30.276 2.868 - 2.880: 98.5117% ( 1) 00:10:30.276 2.880 - 2.892: 98.5192% ( 1) 00:10:30.276 2.892 - 2.904: 98.5343% ( 2) 00:10:30.276 2.927 - 2.939: 98.5418% ( 1) 00:10:30.276 2.951 - 2.963: 98.5568% ( 2) 00:10:30.276 2.975 - 2.987: 98.5643% ( 1) 00:10:30.276 2.987 - 2.999: 98.5869% ( 3) 00:10:30.276 3.010 - 3.022: 98.5944% ( 1) 00:10:30.276 3.034 - 3.058: 98.6019% ( 1) 00:10:30.276 3.058 - 3.081: 98.6170% ( 2) 00:10:30.276 3.295 - 3.319: 98.6245% ( 1) 00:10:30.276 3.342 - 3.366: 98.6320% ( 1) 00:10:30.276 3.413 - 3.437: 98.6470% ( 2) 00:10:30.276 3.484 - 3.508: 98.6545% ( 1) 00:10:30.276 3.508 - 3.532: 98.6621% 
( 1) 00:10:30.276 3.532 - 3.556: 98.6696% ( 1) 00:10:30.276 3.579 - 3.603: 98.6846% ( 2) 00:10:30.276 3.627 - 3.650: 98.6996% ( 2) 00:10:30.276 3.650 - 3.674: 98.7072% ( 1) 00:10:30.276 3.674 - 3.698: 98.7222% ( 2) 00:10:30.276 3.721 - 3.745: 98.7297% ( 1) 00:10:30.276 3.745 - 3.769: 98.7372% ( 1) 00:10:30.276 3.769 - 3.793: 98.7447% ( 1) 00:10:30.276 3.793 - 3.816: 98.7523% ( 1) 00:10:30.276 3.816 - 3.840: 98.7673% ( 2) 00:10:30.276 3.840 - 3.864: 98.7823% ( 2) 00:10:30.276 4.006 - 4.030: 98.7974% ( 2) 00:10:30.276 4.053 - 4.077: 98.8049% ( 1) 00:10:30.276 4.077 - 4.101: 98.8124% ( 1) 00:10:30.276 4.101 - 4.124: 98.8199% ( 1) 00:10:30.276 4.196 - 4.219: 98.8274% ( 1) 00:10:30.276 4.219 - 4.243: 98.8425% ( 2) 00:10:30.276 4.622 - 4.646: 98.8500% ( 1) 00:10:30.276 6.163 - 6.210: 98.8575% ( 1) 00:10:30.276 6.210 - 6.258: 98.8650% ( 1) 00:10:30.276 6.447 - 6.495: 98.8800% ( 2) 00:10:30.276 6.495 - 6.542: 98.8876% ( 1) 00:10:30.276 6.542 - 6.590: 98.8951% ( 1) 00:10:30.276 6.684 - 6.732: 98.9026% ( 1) 00:10:30.276 6.732 - 6.779: 98.9101% ( 1) 00:10:30.276 6.969 - 7.016: 98.9176% ( 1) 00:10:30.276 7.159 - 7.206: 98.9251% ( 1) 00:10:30.276 7.253 - 7.301: 98.9327% ( 1) 00:10:30.276 7.443 - 7.490: 98.9402% ( 1) 00:10:30.276 7.822 - 7.870: 98.9477% ( 1) 00:10:30.276 7.917 - 7.964: 98.9552% ( 1) 00:10:30.276 8.154 - 8.201: 98.9627% ( 1) 00:10:30.276 8.201 - 8.249: 98.9702% ( 1) 00:10:30.276 8.628 - 8.676: 98.9778% ( 1) 00:10:30.276 8.723 - 8.770: 98.9853% ( 1) 00:10:30.276 9.007 - 9.055: 98.9928% ( 1) 00:10:30.276 9.339 - 9.387: 99.0003%[2024-07-15 19:05:10.674645] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:30.535 ( 1) 00:10:30.535 10.999 - 11.046: 99.0078% ( 1) 00:10:30.535 12.089 - 12.136: 99.0153% ( 1) 00:10:30.535 15.550 - 15.644: 99.0229% ( 1) 00:10:30.535 15.644 - 15.739: 99.0304% ( 1) 00:10:30.535 15.739 - 15.834: 99.0454% ( 2) 00:10:30.535 15.834 - 15.929: 99.0529% ( 1) 00:10:30.535 15.929 - 16.024: 99.0905% ( 5) 00:10:30.535 16.024 - 16.119: 99.0980% ( 1) 00:10:30.535 16.119 - 16.213: 99.1130% ( 2) 00:10:30.535 16.213 - 16.308: 99.1431% ( 4) 00:10:30.535 16.308 - 16.403: 99.1732% ( 4) 00:10:30.535 16.403 - 16.498: 99.2183% ( 6) 00:10:30.535 16.498 - 16.593: 99.2483% ( 4) 00:10:30.535 16.687 - 16.782: 99.2934% ( 6) 00:10:30.535 16.782 - 16.877: 99.3085% ( 2) 00:10:30.535 16.877 - 16.972: 99.3310% ( 3) 00:10:30.535 16.972 - 17.067: 99.3385% ( 1) 00:10:30.535 17.067 - 17.161: 99.3536% ( 2) 00:10:30.535 17.161 - 17.256: 99.3761% ( 3) 00:10:30.535 17.256 - 17.351: 99.3836% ( 1) 00:10:30.535 17.541 - 17.636: 99.3912% ( 1) 00:10:30.535 17.920 - 18.015: 99.3987% ( 1) 00:10:30.535 18.110 - 18.204: 99.4062% ( 1) 00:10:30.535 18.394 - 18.489: 99.4137% ( 1) 00:10:30.535 21.997 - 22.092: 99.4212% ( 1) 00:10:30.535 34.702 - 34.892: 99.4287% ( 1) 00:10:30.535 175.218 - 175.976: 99.4363% ( 1) 00:10:30.535 3980.705 - 4004.978: 99.8121% ( 50) 00:10:30.535 4004.978 - 4029.250: 99.9925% ( 24) 00:10:30.535 5995.330 - 6019.603: 100.0000% ( 1) 00:10:30.535 00:10:30.535 19:05:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:30.535 19:05:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:30.535 19:05:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:30.535 19:05:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # 
local malloc_num=Malloc4 00:10:30.535 19:05:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:30.792 [ 00:10:30.792 { 00:10:30.792 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:30.792 "subtype": "Discovery", 00:10:30.792 "listen_addresses": [], 00:10:30.792 "allow_any_host": true, 00:10:30.792 "hosts": [] 00:10:30.792 }, 00:10:30.792 { 00:10:30.792 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:30.792 "subtype": "NVMe", 00:10:30.792 "listen_addresses": [ 00:10:30.792 { 00:10:30.792 "trtype": "VFIOUSER", 00:10:30.792 "adrfam": "IPv4", 00:10:30.792 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:30.792 "trsvcid": "0" 00:10:30.792 } 00:10:30.792 ], 00:10:30.792 "allow_any_host": true, 00:10:30.792 "hosts": [], 00:10:30.792 "serial_number": "SPDK1", 00:10:30.792 "model_number": "SPDK bdev Controller", 00:10:30.792 "max_namespaces": 32, 00:10:30.792 "min_cntlid": 1, 00:10:30.792 "max_cntlid": 65519, 00:10:30.792 "namespaces": [ 00:10:30.792 { 00:10:30.792 "nsid": 1, 00:10:30.792 "bdev_name": "Malloc1", 00:10:30.792 "name": "Malloc1", 00:10:30.792 "nguid": "88006F59288C4DBD8F3BBC3FA75F4B09", 00:10:30.792 "uuid": "88006f59-288c-4dbd-8f3b-bc3fa75f4b09" 00:10:30.792 }, 00:10:30.792 { 00:10:30.792 "nsid": 2, 00:10:30.792 "bdev_name": "Malloc3", 00:10:30.792 "name": "Malloc3", 00:10:30.792 "nguid": "C19A3450C59A41B3949E8841B73EAC9A", 00:10:30.792 "uuid": "c19a3450-c59a-41b3-949e-8841b73eac9a" 00:10:30.792 } 00:10:30.792 ] 00:10:30.792 }, 00:10:30.792 { 00:10:30.792 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:30.792 "subtype": "NVMe", 00:10:30.792 "listen_addresses": [ 00:10:30.792 { 00:10:30.792 "trtype": "VFIOUSER", 00:10:30.792 "adrfam": "IPv4", 00:10:30.792 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:30.792 "trsvcid": "0" 00:10:30.792 } 00:10:30.792 ], 00:10:30.792 "allow_any_host": true, 00:10:30.792 "hosts": [], 00:10:30.792 "serial_number": "SPDK2", 00:10:30.792 "model_number": "SPDK bdev Controller", 00:10:30.792 "max_namespaces": 32, 00:10:30.792 "min_cntlid": 1, 00:10:30.792 "max_cntlid": 65519, 00:10:30.792 "namespaces": [ 00:10:30.792 { 00:10:30.792 "nsid": 1, 00:10:30.792 "bdev_name": "Malloc2", 00:10:30.792 "name": "Malloc2", 00:10:30.792 "nguid": "DAF8B823D1BF4B81B5759C27A7E581FF", 00:10:30.792 "uuid": "daf8b823-d1bf-4b81-b575-9c27a7e581ff" 00:10:30.792 } 00:10:30.792 ] 00:10:30.792 } 00:10:30.792 ] 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3253864 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:30.792 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:30.792 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.792 [2024-07-15 19:05:11.170381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.098 Malloc4 00:10:31.098 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:31.098 [2024-07-15 19:05:11.524993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.357 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:31.357 Asynchronous Event Request test 00:10:31.357 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:31.357 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:31.357 Registering asynchronous event callbacks... 00:10:31.357 Starting namespace attribute notice tests for all controllers... 00:10:31.357 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:31.357 aer_cb - Changed Namespace 00:10:31.357 Cleaning up... 00:10:31.357 [ 00:10:31.357 { 00:10:31.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:31.357 "subtype": "Discovery", 00:10:31.357 "listen_addresses": [], 00:10:31.357 "allow_any_host": true, 00:10:31.357 "hosts": [] 00:10:31.357 }, 00:10:31.357 { 00:10:31.357 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:31.357 "subtype": "NVMe", 00:10:31.357 "listen_addresses": [ 00:10:31.357 { 00:10:31.357 "trtype": "VFIOUSER", 00:10:31.357 "adrfam": "IPv4", 00:10:31.357 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:31.357 "trsvcid": "0" 00:10:31.357 } 00:10:31.357 ], 00:10:31.357 "allow_any_host": true, 00:10:31.357 "hosts": [], 00:10:31.357 "serial_number": "SPDK1", 00:10:31.357 "model_number": "SPDK bdev Controller", 00:10:31.357 "max_namespaces": 32, 00:10:31.357 "min_cntlid": 1, 00:10:31.357 "max_cntlid": 65519, 00:10:31.357 "namespaces": [ 00:10:31.357 { 00:10:31.357 "nsid": 1, 00:10:31.357 "bdev_name": "Malloc1", 00:10:31.357 "name": "Malloc1", 00:10:31.357 "nguid": "88006F59288C4DBD8F3BBC3FA75F4B09", 00:10:31.357 "uuid": "88006f59-288c-4dbd-8f3b-bc3fa75f4b09" 00:10:31.357 }, 00:10:31.357 { 00:10:31.357 "nsid": 2, 00:10:31.357 "bdev_name": "Malloc3", 00:10:31.357 "name": "Malloc3", 00:10:31.357 "nguid": "C19A3450C59A41B3949E8841B73EAC9A", 00:10:31.357 "uuid": "c19a3450-c59a-41b3-949e-8841b73eac9a" 00:10:31.357 } 00:10:31.357 ] 00:10:31.357 }, 00:10:31.357 { 00:10:31.357 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:31.357 "subtype": "NVMe", 00:10:31.357 "listen_addresses": [ 00:10:31.357 { 00:10:31.357 "trtype": "VFIOUSER", 00:10:31.357 "adrfam": "IPv4", 00:10:31.357 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:31.357 "trsvcid": "0" 00:10:31.357 } 00:10:31.357 ], 00:10:31.357 "allow_any_host": true, 00:10:31.357 "hosts": [], 00:10:31.357 "serial_number": "SPDK2", 00:10:31.357 "model_number": "SPDK bdev Controller", 00:10:31.357 
"max_namespaces": 32, 00:10:31.357 "min_cntlid": 1, 00:10:31.357 "max_cntlid": 65519, 00:10:31.357 "namespaces": [ 00:10:31.357 { 00:10:31.357 "nsid": 1, 00:10:31.357 "bdev_name": "Malloc2", 00:10:31.357 "name": "Malloc2", 00:10:31.357 "nguid": "DAF8B823D1BF4B81B5759C27A7E581FF", 00:10:31.357 "uuid": "daf8b823-d1bf-4b81-b575-9c27a7e581ff" 00:10:31.357 }, 00:10:31.357 { 00:10:31.357 "nsid": 2, 00:10:31.357 "bdev_name": "Malloc4", 00:10:31.357 "name": "Malloc4", 00:10:31.357 "nguid": "9C3052D8663E4474AE4BBD05C2FC9C92", 00:10:31.357 "uuid": "9c3052d8-663e-4474-ae4b-bd05c2fc9c92" 00:10:31.357 } 00:10:31.357 ] 00:10:31.357 } 00:10:31.357 ] 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3253864 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3248236 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3248236 ']' 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3248236 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3248236 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3248236' 00:10:31.615 killing process with pid 3248236 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3248236 00:10:31.615 19:05:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3248236 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3254004 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3254004' 00:10:31.872 Process pid: 3254004 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3254004 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3254004 ']' 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.872 19:05:12 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.872 19:05:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:31.872 [2024-07-15 19:05:12.269466] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:31.872 [2024-07-15 19:05:12.270519] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:10:31.872 [2024-07-15 19:05:12.270575] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.872 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.130 [2024-07-15 19:05:12.335556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.130 [2024-07-15 19:05:12.454740] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.130 [2024-07-15 19:05:12.454795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.130 [2024-07-15 19:05:12.454812] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.130 [2024-07-15 19:05:12.454825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.130 [2024-07-15 19:05:12.454837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.130 [2024-07-15 19:05:12.454935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.130 [2024-07-15 19:05:12.454968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.130 [2024-07-15 19:05:12.455017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.130 [2024-07-15 19:05:12.455021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.387 [2024-07-15 19:05:12.574665] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:32.387 [2024-07-15 19:05:12.574971] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:32.387 [2024-07-15 19:05:12.575935] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:32.387 [2024-07-15 19:05:12.576177] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:32.387 [2024-07-15 19:05:12.579125] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
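The trace that follows rebuilds the two vfio-user controllers against the freshly started interrupt-mode target. A consolidated sketch of that same RPC sequence, with rpc.py abbreviating /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py as invoked in the trace below (paths, sizes, and flags taken from the trace itself):
  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i          # malloc bdev: 64 MB, 512-byte blocks
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done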
00:10:32.953 19:05:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.953 19:05:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:32.953 19:05:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:33.885 19:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:34.143 19:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:34.143 19:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:34.143 19:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:34.143 19:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:34.143 19:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:34.710 Malloc1 00:10:34.710 19:05:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:34.970 19:05:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:34.970 19:05:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:35.229 19:05:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:35.229 19:05:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:35.229 19:05:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:35.795 Malloc2 00:10:35.795 19:05:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:36.054 19:05:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:36.054 19:05:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:36.330 19:05:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:36.330 19:05:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3254004 00:10:36.330 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3254004 ']' 00:10:36.330 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3254004 00:10:36.330 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:36.330 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.331 19:05:16 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3254004 00:10:36.331 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.331 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.331 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3254004' 00:10:36.331 killing process with pid 3254004 00:10:36.331 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3254004 00:10:36.331 19:05:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3254004 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:36.897 00:10:36.897 real 0m53.797s 00:10:36.897 user 3m31.937s 00:10:36.897 sys 0m4.649s 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:36.897 ************************************ 00:10:36.897 END TEST nvmf_vfio_user 00:10:36.897 ************************************ 00:10:36.897 19:05:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:36.897 19:05:17 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:36.897 19:05:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:36.897 19:05:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.897 19:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.897 ************************************ 00:10:36.897 START TEST nvmf_vfio_user_nvme_compliance 00:10:36.897 ************************************ 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:36.897 * Looking for test storage... 
00:10:36.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.897 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3254730 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3254730' 00:10:36.898 Process pid: 3254730 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3254730 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3254730 ']' 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.898 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:36.898 [2024-07-15 19:05:17.263853] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:10:36.898 [2024-07-15 19:05:17.263970] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.898 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.898 [2024-07-15 19:05:17.322370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.158 [2024-07-15 19:05:17.429572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.158 [2024-07-15 19:05:17.429627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.158 [2024-07-15 19:05:17.429655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.158 [2024-07-15 19:05:17.429667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.158 [2024-07-15 19:05:17.429676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
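The compliance suite starts its target the same way every suite in this log does: launch nvmf_tgt in the background, remember its pid, install a cleanup trap, then poll until the RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten and killprocess helpers live in autotest_common.sh and retry up to 100 times):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    for ((i = 0; i < 100; i++)); do                       # simplified stand-in for waitforlisten
        scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" || exit 1                      # target died before it started listening
        sleep 0.5
    done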
00:10:37.158 [2024-07-15 19:05:17.429750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.158 [2024-07-15 19:05:17.429834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.158 [2024-07-15 19:05:17.429838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.158 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.158 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:37.158 19:05:17 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:38.536 malloc0 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:38.536 19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.536 
19:05:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:38.536 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.536 00:10:38.536 00:10:38.536 CUnit - A unit testing framework for C - Version 2.1-3 00:10:38.536 http://cunit.sourceforge.net/ 00:10:38.536 00:10:38.536 00:10:38.536 Suite: nvme_compliance 00:10:38.536 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 19:05:18.770116] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:38.536 [2024-07-15 19:05:18.771544] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:38.536 [2024-07-15 19:05:18.771567] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:38.536 [2024-07-15 19:05:18.771593] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:38.536 [2024-07-15 19:05:18.773136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:38.536 passed 00:10:38.536 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 19:05:18.857709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:38.536 [2024-07-15 19:05:18.862744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:38.536 passed 00:10:38.536 Test: admin_identify_ns ...[2024-07-15 19:05:18.949391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:38.795 [2024-07-15 19:05:19.008897] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:38.795 [2024-07-15 19:05:19.016910] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:38.795 [2024-07-15 19:05:19.038024] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:38.795 passed 00:10:38.795 Test: admin_get_features_mandatory_features ...[2024-07-15 19:05:19.121066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:38.795 [2024-07-15 19:05:19.125088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:38.795 passed 00:10:38.795 Test: admin_get_features_optional_features ...[2024-07-15 19:05:19.209645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:38.795 [2024-07-15 19:05:19.212661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.054 passed 00:10:39.054 Test: admin_set_features_number_of_queues ...[2024-07-15 19:05:19.297985] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.054 [2024-07-15 19:05:19.400995] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.054 passed 00:10:39.312 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 19:05:19.487127] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.312 [2024-07-15 19:05:19.490155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.312 passed 00:10:39.312 Test: admin_get_log_page_with_lpo ...[2024-07-15 19:05:19.572489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.312 [2024-07-15 19:05:19.639898] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:39.312 [2024-07-15 19:05:19.652979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.312 passed 00:10:39.312 Test: fabric_property_get ...[2024-07-15 19:05:19.736653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.312 [2024-07-15 19:05:19.737960] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:39.312 [2024-07-15 19:05:19.739676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.571 passed 00:10:39.571 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 19:05:19.824256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.571 [2024-07-15 19:05:19.825535] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:39.571 [2024-07-15 19:05:19.827278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.571 passed 00:10:39.571 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 19:05:19.909484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.571 [2024-07-15 19:05:19.993888] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:39.829 [2024-07-15 19:05:20.009889] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:39.830 [2024-07-15 19:05:20.015005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.830 passed 00:10:39.830 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 19:05:20.098197] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.830 [2024-07-15 19:05:20.099532] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:39.830 [2024-07-15 19:05:20.101222] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:39.830 passed 00:10:39.830 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 19:05:20.184284] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:39.830 [2024-07-15 19:05:20.259907] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:40.087 [2024-07-15 19:05:20.283883] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:40.087 [2024-07-15 19:05:20.289062] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.087 passed 00:10:40.087 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 19:05:20.372164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.087 [2024-07-15 19:05:20.373450] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:40.087 [2024-07-15 19:05:20.373506] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:40.087 [2024-07-15 19:05:20.375186] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.087 passed 00:10:40.087 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 19:05:20.458335] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.344 [2024-07-15 19:05:20.549903] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:40.344 [2024-07-15 19:05:20.557905] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:40.344 [2024-07-15 19:05:20.565901] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:40.344 [2024-07-15 19:05:20.573890] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:40.345 [2024-07-15 19:05:20.602997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.345 passed 00:10:40.345 Test: admin_create_io_sq_verify_pc ...[2024-07-15 19:05:20.687069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:40.345 [2024-07-15 19:05:20.704900] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:40.345 [2024-07-15 19:05:20.722443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:40.345 passed 00:10:40.603 Test: admin_create_io_qp_max_qps ...[2024-07-15 19:05:20.803996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:41.537 [2024-07-15 19:05:21.903893] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:42.106 [2024-07-15 19:05:22.279682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.106 passed 00:10:42.106 Test: admin_create_io_sq_shared_cq ...[2024-07-15 19:05:22.361627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:42.106 [2024-07-15 19:05:22.493886] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:42.106 [2024-07-15 19:05:22.530978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:42.366 passed 00:10:42.366 00:10:42.366 Run Summary: Type Total Ran Passed Failed Inactive 00:10:42.366 suites 1 1 n/a 0 0 00:10:42.366 tests 18 18 18 0 0 00:10:42.366 asserts 360 360 360 0 n/a 00:10:42.366 00:10:42.366 Elapsed time = 1.559 seconds 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3254730 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3254730 ']' 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3254730 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3254730 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3254730' 00:10:42.366 killing process with pid 3254730 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3254730 00:10:42.366 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3254730 00:10:42.625 19:05:22 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:42.625 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:42.625 00:10:42.625 real 0m5.761s 00:10:42.625 user 0m16.161s 00:10:42.625 sys 0m0.531s 00:10:42.625 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.625 19:05:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.625 ************************************ 00:10:42.625 END TEST nvmf_vfio_user_nvme_compliance 00:10:42.625 ************************************ 00:10:42.625 19:05:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:42.625 19:05:22 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:42.625 19:05:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:42.625 19:05:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.625 19:05:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.625 ************************************ 00:10:42.625 START TEST nvmf_vfio_user_fuzz 00:10:42.625 ************************************ 00:10:42.625 19:05:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:42.625 * Looking for test storage... 00:10:42.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
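The teardown traced just above (and repeated after every suite in this log) is the killprocess helper. Reconstructed from the trace into a compact sketch; the real helper in autotest_common.sh also resolves the wrapped child pid when the process turns out to be sudo, which is omitted here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to kill
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then              # here it is reactor_0, so fall through
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }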
00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.625 19:05:23 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3255450 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3255450' 00:10:42.625 Process pid: 3255450 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3255450 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3255450 ']' 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.625 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:42.626 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.626 19:05:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:44.003 19:05:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.003 19:05:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:44.003 19:05:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:44.935 malloc0 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:44.935 19:05:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:17.025 Fuzzing completed. 
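The fuzz stage itself comes down to pointing the nvme_fuzz app at the vfio-user endpoint created a moment earlier; flags copied from the run above (a 30-second run with the fixed seed 123456):

    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The opcode dumps that follow summarize which admin and I/O commands completed successfully during that window.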
Shutting down the fuzz application 00:11:17.025 00:11:17.025 Dumping successful admin opcodes: 00:11:17.025 8, 9, 10, 24, 00:11:17.025 Dumping successful io opcodes: 00:11:17.025 0, 00:11:17.025 NS: 0x200003a1ef00 I/O qp, Total commands completed: 622974, total successful commands: 2410, random_seed: 1304524224 00:11:17.025 NS: 0x200003a1ef00 admin qp, Total commands completed: 80690, total successful commands: 639, random_seed: 1895854720 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3255450 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3255450 ']' 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3255450 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3255450 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3255450' 00:11:17.025 killing process with pid 3255450 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3255450 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3255450 00:11:17.025 19:05:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:17.025 19:05:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:17.025 00:11:17.025 real 0m34.070s 00:11:17.025 user 0m34.592s 00:11:17.025 sys 0m30.300s 00:11:17.025 19:05:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.025 19:05:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:17.025 ************************************ 00:11:17.025 END TEST nvmf_vfio_user_fuzz 00:11:17.025 ************************************ 00:11:17.025 19:05:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:17.025 19:05:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:17.025 19:05:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.025 19:05:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.025 19:05:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.025 ************************************ 00:11:17.025 
START TEST nvmf_host_management 00:11:17.025 ************************************ 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:17.025 * Looking for test storage... 00:11:17.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.025 19:05:57 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.025 19:05:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.025 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.026 19:05:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.026 19:05:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.026 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.026 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.026 19:05:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.026 19:05:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.966 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:18.967 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:18.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:18.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:18.967 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.967 19:05:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:11:18.967 00:11:18.967 --- 10.0.0.2 ping statistics --- 00:11:18.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.967 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:11:18.967 00:11:18.967 --- 10.0.0.1 ping statistics --- 00:11:18.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.967 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3261046 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3261046 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3261046 ']' 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:18.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.967 19:05:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.967 [2024-07-15 19:05:59.099712] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:11:18.967 [2024-07-15 19:05:59.099785] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.967 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.967 [2024-07-15 19:05:59.164462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.967 [2024-07-15 19:05:59.281844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.967 [2024-07-15 19:05:59.281913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.967 [2024-07-15 19:05:59.281930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.967 [2024-07-15 19:05:59.281943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.967 [2024-07-15 19:05:59.281955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.967 [2024-07-15 19:05:59.282050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.967 [2024-07-15 19:05:59.282144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.967 [2024-07-15 19:05:59.282213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:18.967 [2024-07-15 19:05:59.282216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.906 [2024-07-15 19:06:00.056742] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.906 19:06:00 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.906 Malloc0 00:11:19.906 [2024-07-15 19:06:00.115905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3261240 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3261240 /var/tmp/bdevperf.sock 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3261240 ']' 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:19.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
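At this point the target side is fully provisioned: the namespaced nvmf_tgt has a TCP transport (nvmf_create_transport -t tcp -o -u 8192), a Malloc0 bdev, and an NVMe/TCP listener on 10.0.0.2:4420, while bdevperf has just been launched on the host side with its own RPC socket at /var/tmp/bdevperf.sock. The subsystem RPCs themselves are batched through rpcs.txt and piped into rpc_cmd, so they are not echoed in the trace. A rough scripts/rpc.py equivalent is sketched below; only the bdev name, NQNs and listener address are taken from the surrounding output, the sizes and serial number are assumed:

    # sketch only -- the real commands live in rpcs.txt, which the test cats into rpc_cmd
    rpc.py bdev_malloc_create 64 512 -b Malloc0                        # size/block size assumed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0   # serial number assumed
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Against the namespaced target these would run under ip netns exec cvl_0_0_ns_spdk, which is what the NVMF_TARGET_NS_CMD prefix set up earlier provides.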
00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:19.906 { 00:11:19.906 "params": { 00:11:19.906 "name": "Nvme$subsystem", 00:11:19.906 "trtype": "$TEST_TRANSPORT", 00:11:19.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.906 "adrfam": "ipv4", 00:11:19.906 "trsvcid": "$NVMF_PORT", 00:11:19.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.906 "hdgst": ${hdgst:-false}, 00:11:19.906 "ddgst": ${ddgst:-false} 00:11:19.906 }, 00:11:19.906 "method": "bdev_nvme_attach_controller" 00:11:19.906 } 00:11:19.906 EOF 00:11:19.906 )") 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:19.906 19:06:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:19.906 "params": { 00:11:19.906 "name": "Nvme0", 00:11:19.906 "trtype": "tcp", 00:11:19.906 "traddr": "10.0.0.2", 00:11:19.906 "adrfam": "ipv4", 00:11:19.906 "trsvcid": "4420", 00:11:19.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:19.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:19.906 "hdgst": false, 00:11:19.906 "ddgst": false 00:11:19.906 }, 00:11:19.906 "method": "bdev_nvme_attach_controller" 00:11:19.906 }' 00:11:19.906 [2024-07-15 19:06:00.189810] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:11:19.906 [2024-07-15 19:06:00.189927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261240 ] 00:11:19.906 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.906 [2024-07-15 19:06:00.252709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.164 [2024-07-15 19:06:00.364078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.425 Running I/O for 10 seconds... 
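The --json /dev/fd/63 argument in the bdevperf command line is bash process substitution: gen_nvmf_target_json assembles the connection config (the bdev_nvme_attach_controller entry printed just above, inside a JSON envelope the trace does not echo in full) and bdevperf reads it from that file descriptor without a temporary file. A minimal, self-contained illustration of the mechanism only, with a hypothetical stand-in generator rather than the real helper:

    #!/usr/bin/env bash
    # <(cmd) exposes cmd's stdout as /dev/fd/NN, which bdevperf opens like a regular file
    gen_config() { printf '{ "subsystems": [] }\n'; }   # stand-in; the real config carries the entry above
    ./build/examples/bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 10

The workload itself is a 64-deep, 64 KiB verify job against the attached Nvme0n1 bdev, here started for a 10-second run.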
00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:11:20.425 19:06:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.685 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.685 [2024-07-15 19:06:01.055402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.685 [2024-07-15 19:06:01.055785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.685 [2024-07-15 19:06:01.055799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.055814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.055828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.055843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.055856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.055871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.055900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.055917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.055930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.055946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.055959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.055978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.055992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:20.686 [2024-07-15 19:06:01.056612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 
19:06:01.056923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.056982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.056997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.057011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.057026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.057040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.686 [2024-07-15 19:06:01.057055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.686 [2024-07-15 19:06:01.057069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:20.687 [2024-07-15 19:06:01.057363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.687 [2024-07-15 19:06:01.057444] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17cf900 was disconnected and freed. reset controller. 
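The flood of ABORTED - SQ DELETION completions above is the behaviour under test, not a failure. Once the waitforio gate saw enough traffic (bdev_get_iostat reported num_read_ops=323, past the 100-op threshold), the script removed nqn.2016-06.io.spdk:host0 from cnode0 with nvmf_subsystem_remove_host; the target then deleted that host's submission queues, every in-flight read/write completed with abort status, and the bdev_nvme layer freed the qpair and scheduled a controller reset. The polling gate is a condensed reconstruction of exactly what the trace shows, with the harness helper name (rpc_cmd) kept as-is:

    # poll bdevperf's RPC socket until the job has completed at least 100 reads (10 tries max)
    i=10
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.25
        (( i-- ))
    done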
00:11:20.687 [2024-07-15 19:06:01.058616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:20.687 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.687 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:20.687 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.687 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:20.687 task offset: 55296 on job bdev=Nvme0n1 fails 00:11:20.687 00:11:20.687 Latency(us) 00:11:20.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.687 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:20.687 Job: Nvme0n1 ended in about 0.39 seconds with error 00:11:20.687 Verification LBA range: start 0x0 length 0x400 00:11:20.687 Nvme0n1 : 0.39 996.08 62.25 166.01 0.00 53562.42 2827.76 48545.19 00:11:20.687 =================================================================================================================== 00:11:20.687 Total : 996.08 62.25 166.01 0.00 53562.42 2827.76 48545.19 00:11:20.687 [2024-07-15 19:06:01.060522] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:20.687 [2024-07-15 19:06:01.060567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13be790 (9): Bad file descriptor 00:11:20.687 19:06:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.687 19:06:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:20.687 [2024-07-15 19:06:01.081354] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
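The aborted job's numbers are self-consistent: the table reports the run at roughly 0.39 s, so

    completed I/O ≈ 996.08 IOPS   x 0.39 s ≈ 388
    aborted   I/O ≈ 166.01 Fail/s x 0.39 s ≈ 65    (about the -q 64 queue depth in flight)

i.e. what was queued when the submission queues were deleted shows up in the Fail column. Re-adding the host with nvmf_subsystem_add_host immediately afterwards is what allows the controller reset logged at 19:06:01.081 to succeed, which is the host-management behaviour this part of the test exercises.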
00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3261240 00:11:22.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3261240) - No such process 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.064 { 00:11:22.064 "params": { 00:11:22.064 "name": "Nvme$subsystem", 00:11:22.064 "trtype": "$TEST_TRANSPORT", 00:11:22.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.064 "adrfam": "ipv4", 00:11:22.064 "trsvcid": "$NVMF_PORT", 00:11:22.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.064 "hdgst": ${hdgst:-false}, 00:11:22.064 "ddgst": ${ddgst:-false} 00:11:22.064 }, 00:11:22.064 "method": "bdev_nvme_attach_controller" 00:11:22.064 } 00:11:22.064 EOF 00:11:22.064 )") 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:22.064 19:06:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.064 "params": { 00:11:22.064 "name": "Nvme0", 00:11:22.064 "trtype": "tcp", 00:11:22.064 "traddr": "10.0.0.2", 00:11:22.064 "adrfam": "ipv4", 00:11:22.064 "trsvcid": "4420", 00:11:22.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:22.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:22.064 "hdgst": false, 00:11:22.064 "ddgst": false 00:11:22.064 }, 00:11:22.064 "method": "bdev_nvme_attach_controller" 00:11:22.064 }' 00:11:22.064 [2024-07-15 19:06:02.117315] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:11:22.064 [2024-07-15 19:06:02.117391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261611 ] 00:11:22.064 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.064 [2024-07-15 19:06:02.178928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.064 [2024-07-15 19:06:02.292011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.323 Running I/O for 1 seconds... 
00:11:23.260 00:11:23.260 Latency(us) 00:11:23.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.260 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:23.260 Verification LBA range: start 0x0 length 0x400 00:11:23.260 Nvme0n1 : 1.02 1383.42 86.46 0.00 0.00 45571.36 9466.31 40389.59 00:11:23.260 =================================================================================================================== 00:11:23.260 Total : 1383.42 86.46 0.00 0.00 45571.36 9466.31 40389.59 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.519 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.519 rmmod nvme_tcp 00:11:23.776 rmmod nvme_fabrics 00:11:23.776 rmmod nvme_keyring 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3261046 ']' 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3261046 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3261046 ']' 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3261046 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.776 19:06:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3261046 00:11:23.776 19:06:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:23.776 19:06:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:23.776 19:06:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3261046' 00:11:23.776 killing process with pid 3261046 00:11:23.776 19:06:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3261046 00:11:23.776 19:06:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3261046 00:11:24.035 [2024-07-15 19:06:04.274460] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:24.035 19:06:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.938 19:06:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:25.938 19:06:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:25.938 00:11:25.938 real 0m9.256s 00:11:25.938 user 0m23.053s 00:11:25.938 sys 0m2.550s 00:11:25.938 19:06:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.938 19:06:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.938 ************************************ 00:11:25.938 END TEST nvmf_host_management 00:11:25.938 ************************************ 00:11:25.938 19:06:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:25.938 19:06:06 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:25.938 19:06:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:25.938 19:06:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.938 19:06:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:26.197 ************************************ 00:11:26.197 START TEST nvmf_lvol 00:11:26.197 ************************************ 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:26.197 * Looking for test storage... 
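The START TEST / END TEST banners and the real/user/sys timings around each suite come from the run_test wrapper used by the autotest harness: it prints a banner, runs the given script under time, and prints the closing banner, so every suite in this log is delimited the same way. A minimal sketch of that pattern (the actual helper lives in autotest_common.sh and does additional bookkeeping such as return-code handling):

    run_test() {                 # sketch of the wrapper pattern, not the real helper
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test nvmf_lvol ./test/nvmf/target/nvmf_lvol.sh --transport=tcp

Here it hands off to nvmf_lvol.sh with --transport=tcp, which begins by sourcing nvmf/common.sh again, as the following lines show.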
00:11:26.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.197 19:06:06 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.197 19:06:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.099 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.099 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.099 
19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:11:28.099 00:11:28.099 --- 10.0.0.2 ping statistics --- 00:11:28.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.099 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:28.099 00:11:28.099 --- 10.0.0.1 ping statistics --- 00:11:28.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.099 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3264114 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3264114 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3264114 ']' 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.099 19:06:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:28.099 [2024-07-15 19:06:08.498244] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:11:28.099 [2024-07-15 19:06:08.498349] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.357 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.357 [2024-07-15 19:06:08.570365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.357 [2024-07-15 19:06:08.690734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.357 [2024-07-15 19:06:08.690794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:28.357 [2024-07-15 19:06:08.690811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.357 [2024-07-15 19:06:08.690824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.357 [2024-07-15 19:06:08.690835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.357 [2024-07-15 19:06:08.690930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.357 [2024-07-15 19:06:08.690979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.357 [2024-07-15 19:06:08.690982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.287 19:06:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.287 19:06:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:29.287 19:06:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.287 19:06:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.287 19:06:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:29.287 19:06:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.287 19:06:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:29.287 [2024-07-15 19:06:09.718893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.547 19:06:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:29.805 19:06:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:29.805 19:06:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:30.062 19:06:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:30.062 19:06:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:30.320 19:06:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:30.576 19:06:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6746a8a1-1782-4fde-b5ec-189d6502711d 00:11:30.576 19:06:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6746a8a1-1782-4fde-b5ec-189d6502711d lvol 20 00:11:30.832 19:06:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c6d8552d-7e72-4cbc-a35f-7c415c9ed785 00:11:30.832 19:06:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:31.088 19:06:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6d8552d-7e72-4cbc-a35f-7c415c9ed785 00:11:31.345 19:06:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:31.601 [2024-07-15 19:06:11.783927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.601 19:06:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:31.887 19:06:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3264744 00:11:31.887 19:06:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:31.887 19:06:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:31.887 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.817 19:06:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c6d8552d-7e72-4cbc-a35f-7c415c9ed785 MY_SNAPSHOT 00:11:33.074 19:06:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=117a9c47-4c9f-4d77-8903-dd60188ae207 00:11:33.074 19:06:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c6d8552d-7e72-4cbc-a35f-7c415c9ed785 30 00:11:33.373 19:06:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 117a9c47-4c9f-4d77-8903-dd60188ae207 MY_CLONE 00:11:33.630 19:06:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=86815ae1-ebf6-415e-822b-4dd32be68819 00:11:33.631 19:06:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 86815ae1-ebf6-415e-822b-4dd32be68819 00:11:34.198 19:06:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3264744 00:11:42.319 Initializing NVMe Controllers 00:11:42.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:42.320 Controller IO queue size 128, less than required. 00:11:42.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:42.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:42.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:42.320 Initialization complete. Launching workers. 
00:11:42.320 ======================================================== 00:11:42.320 Latency(us) 00:11:42.320 Device Information : IOPS MiB/s Average min max 00:11:42.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10774.20 42.09 11885.00 1376.57 66636.64 00:11:42.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10597.80 41.40 12084.44 1999.88 72506.48 00:11:42.320 ======================================================== 00:11:42.320 Total : 21372.00 83.48 11983.90 1376.57 72506.48 00:11:42.320 00:11:42.320 19:06:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:42.577 19:06:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6d8552d-7e72-4cbc-a35f-7c415c9ed785 00:11:42.835 19:06:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6746a8a1-1782-4fde-b5ec-189d6502711d 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.093 rmmod nvme_tcp 00:11:43.093 rmmod nvme_fabrics 00:11:43.093 rmmod nvme_keyring 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3264114 ']' 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3264114 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3264114 ']' 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3264114 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.093 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3264114 00:11:43.351 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.351 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.351 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3264114' 00:11:43.351 killing process with pid 3264114 00:11:43.351 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3264114 00:11:43.351 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3264114 00:11:43.611 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.611 
19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.611 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.611 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.611 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.611 19:06:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.611 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.611 19:06:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.521 19:06:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:45.521 00:11:45.521 real 0m19.499s 00:11:45.521 user 1m7.229s 00:11:45.521 sys 0m5.370s 00:11:45.521 19:06:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.521 19:06:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:45.521 ************************************ 00:11:45.521 END TEST nvmf_lvol 00:11:45.521 ************************************ 00:11:45.521 19:06:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:45.521 19:06:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:45.521 19:06:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:45.521 19:06:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.521 19:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.521 ************************************ 00:11:45.521 START TEST nvmf_lvs_grow 00:11:45.521 ************************************ 00:11:45.521 19:06:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:45.782 * Looking for test storage... 
00:11:45.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.782 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.783 19:06:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.783 19:06:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:47.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:47.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:47.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:47.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.688 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.689 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.689 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:47.689 19:06:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:47.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:11:47.689 00:11:47.689 --- 10.0.0.2 ping statistics --- 00:11:47.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.689 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:11:47.689 00:11:47.689 --- 10.0.0.1 ping statistics --- 00:11:47.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.689 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3268014 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3268014 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3268014 ']' 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.689 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:47.689 [2024-07-15 19:06:28.114576] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:11:47.689 [2024-07-15 19:06:28.114654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.948 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.948 [2024-07-15 19:06:28.190608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.948 [2024-07-15 19:06:28.309988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.948 [2024-07-15 19:06:28.310050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:47.948 [2024-07-15 19:06:28.310066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.948 [2024-07-15 19:06:28.310079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.948 [2024-07-15 19:06:28.310091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.948 [2024-07-15 19:06:28.310124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.206 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.206 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:48.206 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.206 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.206 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:48.206 19:06:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.206 19:06:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:48.463 [2024-07-15 19:06:28.681532] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:48.463 ************************************ 00:11:48.463 START TEST lvs_grow_clean 00:11:48.463 ************************************ 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:48.463 19:06:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:48.720 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:48.720 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:48.977 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f130774a-4a6d-4a82-9f70-b004a1b71704 00:11:48.977 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:11:48.977 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:49.234 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:49.234 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:49.234 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f130774a-4a6d-4a82-9f70-b004a1b71704 lvol 150 00:11:49.496 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e6c6e60-bac4-4101-b38f-1dc74c0db2e3 00:11:49.496 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:49.496 19:06:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:49.754 [2024-07-15 19:06:30.029143] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:49.754 [2024-07-15 19:06:30.029294] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:49.754 true 00:11:49.754 19:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:11:49.754 19:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:50.013 19:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:50.013 19:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:50.271 19:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e6c6e60-bac4-4101-b38f-1dc74c0db2e3 00:11:50.529 19:06:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:50.786 [2024-07-15 19:06:31.144527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.786 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3268449 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3268449 /var/tmp/bdevperf.sock 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3268449 ']' 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:51.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.044 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:51.302 [2024-07-15 19:06:31.499207] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:11:51.303 [2024-07-15 19:06:31.499290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268449 ] 00:11:51.303 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.303 [2024-07-15 19:06:31.560919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.303 [2024-07-15 19:06:31.677288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.562 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.562 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:51.562 19:06:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:51.820 Nvme0n1 00:11:51.820 19:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:52.077 [ 00:11:52.077 { 00:11:52.077 "name": "Nvme0n1", 00:11:52.077 "aliases": [ 00:11:52.077 "7e6c6e60-bac4-4101-b38f-1dc74c0db2e3" 00:11:52.077 ], 00:11:52.077 "product_name": "NVMe disk", 00:11:52.077 "block_size": 4096, 00:11:52.077 "num_blocks": 38912, 00:11:52.077 "uuid": "7e6c6e60-bac4-4101-b38f-1dc74c0db2e3", 00:11:52.077 "assigned_rate_limits": { 00:11:52.077 "rw_ios_per_sec": 0, 00:11:52.077 "rw_mbytes_per_sec": 0, 00:11:52.077 "r_mbytes_per_sec": 0, 00:11:52.077 "w_mbytes_per_sec": 0 00:11:52.077 }, 00:11:52.077 "claimed": false, 00:11:52.077 "zoned": false, 00:11:52.077 "supported_io_types": { 00:11:52.077 "read": true, 00:11:52.077 "write": true, 00:11:52.077 "unmap": true, 00:11:52.077 "flush": true, 00:11:52.077 "reset": true, 00:11:52.077 "nvme_admin": true, 00:11:52.077 "nvme_io": true, 00:11:52.077 "nvme_io_md": false, 00:11:52.077 "write_zeroes": true, 00:11:52.077 "zcopy": false, 00:11:52.077 "get_zone_info": false, 00:11:52.077 "zone_management": false, 00:11:52.077 "zone_append": false, 00:11:52.077 "compare": true, 00:11:52.077 "compare_and_write": true, 00:11:52.077 "abort": true, 00:11:52.077 "seek_hole": false, 00:11:52.077 "seek_data": false, 00:11:52.077 "copy": true, 00:11:52.077 "nvme_iov_md": false 00:11:52.077 }, 00:11:52.077 "memory_domains": [ 00:11:52.077 { 00:11:52.077 "dma_device_id": "system", 00:11:52.077 "dma_device_type": 1 00:11:52.077 } 00:11:52.077 ], 00:11:52.077 "driver_specific": { 00:11:52.077 "nvme": [ 00:11:52.077 { 00:11:52.077 "trid": { 00:11:52.077 "trtype": "TCP", 00:11:52.077 "adrfam": "IPv4", 00:11:52.077 "traddr": "10.0.0.2", 00:11:52.078 "trsvcid": "4420", 00:11:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:52.078 }, 00:11:52.078 "ctrlr_data": { 00:11:52.078 "cntlid": 1, 00:11:52.078 "vendor_id": "0x8086", 00:11:52.078 "model_number": "SPDK bdev Controller", 00:11:52.078 "serial_number": "SPDK0", 00:11:52.078 "firmware_revision": "24.09", 00:11:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:52.078 "oacs": { 00:11:52.078 "security": 0, 00:11:52.078 "format": 0, 00:11:52.078 "firmware": 0, 00:11:52.078 "ns_manage": 0 00:11:52.078 }, 00:11:52.078 "multi_ctrlr": true, 00:11:52.078 "ana_reporting": false 00:11:52.078 }, 
00:11:52.078 "vs": { 00:11:52.078 "nvme_version": "1.3" 00:11:52.078 }, 00:11:52.078 "ns_data": { 00:11:52.078 "id": 1, 00:11:52.078 "can_share": true 00:11:52.078 } 00:11:52.078 } 00:11:52.078 ], 00:11:52.078 "mp_policy": "active_passive" 00:11:52.078 } 00:11:52.078 } 00:11:52.078 ] 00:11:52.078 19:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3268585 00:11:52.078 19:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:52.078 19:06:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:52.337 Running I/O for 10 seconds... 00:11:53.296 Latency(us) 00:11:53.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.296 Nvme0n1 : 1.00 14091.00 55.04 0.00 0.00 0.00 0.00 0.00 00:11:53.296 =================================================================================================================== 00:11:53.296 Total : 14091.00 55.04 0.00 0.00 0.00 0.00 0.00 00:11:53.296 00:11:54.229 19:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:11:54.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.229 Nvme0n1 : 2.00 14231.50 55.59 0.00 0.00 0.00 0.00 0.00 00:11:54.229 =================================================================================================================== 00:11:54.229 Total : 14231.50 55.59 0.00 0.00 0.00 0.00 0.00 00:11:54.229 00:11:54.487 true 00:11:54.487 19:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:11:54.487 19:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:54.746 19:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:54.746 19:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:54.746 19:06:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3268585 00:11:55.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.313 Nvme0n1 : 3.00 14482.00 56.57 0.00 0.00 0.00 0.00 0.00 00:11:55.313 =================================================================================================================== 00:11:55.313 Total : 14482.00 56.57 0.00 0.00 0.00 0.00 0.00 00:11:55.313 00:11:56.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.246 Nvme0n1 : 4.00 14578.25 56.95 0.00 0.00 0.00 0.00 0.00 00:11:56.246 =================================================================================================================== 00:11:56.246 Total : 14578.25 56.95 0.00 0.00 0.00 0.00 0.00 00:11:56.246 00:11:57.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.179 Nvme0n1 : 5.00 14616.40 57.10 0.00 0.00 0.00 0.00 0.00 00:11:57.179 =================================================================================================================== 00:11:57.179 
Total : 14616.40 57.10 0.00 0.00 0.00 0.00 0.00 00:11:57.179 00:11:58.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.113 Nvme0n1 : 6.00 14714.17 57.48 0.00 0.00 0.00 0.00 0.00 00:11:58.113 =================================================================================================================== 00:11:58.113 Total : 14714.17 57.48 0.00 0.00 0.00 0.00 0.00 00:11:58.113 00:11:59.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.486 Nvme0n1 : 7.00 14757.29 57.65 0.00 0.00 0.00 0.00 0.00 00:11:59.486 =================================================================================================================== 00:11:59.486 Total : 14757.29 57.65 0.00 0.00 0.00 0.00 0.00 00:11:59.486 00:12:00.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.420 Nvme0n1 : 8.00 14764.62 57.67 0.00 0.00 0.00 0.00 0.00 00:12:00.420 =================================================================================================================== 00:12:00.420 Total : 14764.62 57.67 0.00 0.00 0.00 0.00 0.00 00:12:00.420 00:12:01.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.355 Nvme0n1 : 9.00 14832.22 57.94 0.00 0.00 0.00 0.00 0.00 00:12:01.355 =================================================================================================================== 00:12:01.355 Total : 14832.22 57.94 0.00 0.00 0.00 0.00 0.00 00:12:01.355 00:12:02.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.287 Nvme0n1 : 10.00 14818.60 57.89 0.00 0.00 0.00 0.00 0.00 00:12:02.287 =================================================================================================================== 00:12:02.287 Total : 14818.60 57.89 0.00 0.00 0.00 0.00 0.00 00:12:02.287 00:12:02.287 00:12:02.287 Latency(us) 00:12:02.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.287 Nvme0n1 : 10.01 14817.90 57.88 0.00 0.00 8633.36 2330.17 16602.45 00:12:02.287 =================================================================================================================== 00:12:02.287 Total : 14817.90 57.88 0.00 0.00 8633.36 2330.17 16602.45 00:12:02.287 0 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3268449 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3268449 ']' 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3268449 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268449 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268449' 00:12:02.287 killing process with pid 3268449 00:12:02.287 19:06:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3268449 00:12:02.287 Received shutdown signal, test time was about 10.000000 seconds 00:12:02.287 00:12:02.287 Latency(us) 00:12:02.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.287 =================================================================================================================== 00:12:02.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.287 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3268449 00:12:02.545 19:06:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:02.803 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:03.060 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:12:03.060 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:03.317 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:03.317 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:03.317 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:03.574 [2024-07-15 19:06:43.961653] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:03.574 19:06:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:12:03.831 request: 00:12:03.831 { 00:12:03.831 "uuid": "f130774a-4a6d-4a82-9f70-b004a1b71704", 00:12:03.831 "method": "bdev_lvol_get_lvstores", 00:12:03.831 "req_id": 1 00:12:03.831 } 00:12:03.831 Got JSON-RPC error response 00:12:03.831 response: 00:12:03.831 { 00:12:03.831 "code": -19, 00:12:03.831 "message": "No such device" 00:12:03.831 } 00:12:03.831 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:03.831 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:03.831 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:03.831 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:03.831 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:04.088 aio_bdev 00:12:04.088 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7e6c6e60-bac4-4101-b38f-1dc74c0db2e3 00:12:04.088 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7e6c6e60-bac4-4101-b38f-1dc74c0db2e3 00:12:04.088 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:04.088 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:04.088 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:04.088 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:04.088 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:04.346 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e6c6e60-bac4-4101-b38f-1dc74c0db2e3 -t 2000 00:12:04.604 [ 00:12:04.604 { 00:12:04.604 "name": "7e6c6e60-bac4-4101-b38f-1dc74c0db2e3", 00:12:04.604 "aliases": [ 00:12:04.604 "lvs/lvol" 00:12:04.604 ], 00:12:04.604 "product_name": "Logical Volume", 00:12:04.604 "block_size": 4096, 00:12:04.604 "num_blocks": 38912, 00:12:04.604 "uuid": "7e6c6e60-bac4-4101-b38f-1dc74c0db2e3", 00:12:04.604 "assigned_rate_limits": { 00:12:04.604 "rw_ios_per_sec": 0, 00:12:04.604 "rw_mbytes_per_sec": 0, 00:12:04.604 "r_mbytes_per_sec": 0, 00:12:04.604 "w_mbytes_per_sec": 0 00:12:04.604 }, 00:12:04.604 "claimed": false, 00:12:04.604 "zoned": false, 00:12:04.604 "supported_io_types": { 00:12:04.604 "read": true, 00:12:04.604 "write": true, 00:12:04.604 "unmap": true, 00:12:04.604 "flush": false, 00:12:04.604 "reset": true, 00:12:04.604 "nvme_admin": false, 00:12:04.604 "nvme_io": false, 00:12:04.604 
"nvme_io_md": false, 00:12:04.604 "write_zeroes": true, 00:12:04.604 "zcopy": false, 00:12:04.604 "get_zone_info": false, 00:12:04.604 "zone_management": false, 00:12:04.604 "zone_append": false, 00:12:04.604 "compare": false, 00:12:04.604 "compare_and_write": false, 00:12:04.604 "abort": false, 00:12:04.604 "seek_hole": true, 00:12:04.604 "seek_data": true, 00:12:04.604 "copy": false, 00:12:04.604 "nvme_iov_md": false 00:12:04.604 }, 00:12:04.604 "driver_specific": { 00:12:04.604 "lvol": { 00:12:04.604 "lvol_store_uuid": "f130774a-4a6d-4a82-9f70-b004a1b71704", 00:12:04.604 "base_bdev": "aio_bdev", 00:12:04.604 "thin_provision": false, 00:12:04.604 "num_allocated_clusters": 38, 00:12:04.604 "snapshot": false, 00:12:04.604 "clone": false, 00:12:04.604 "esnap_clone": false 00:12:04.604 } 00:12:04.604 } 00:12:04.604 } 00:12:04.604 ] 00:12:04.604 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:04.604 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:12:04.604 19:06:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:04.862 19:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:04.862 19:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:12:04.862 19:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:05.120 19:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:05.120 19:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e6c6e60-bac4-4101-b38f-1dc74c0db2e3 00:12:05.378 19:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f130774a-4a6d-4a82-9f70-b004a1b71704 00:12:05.636 19:06:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:05.894 00:12:05.894 real 0m17.541s 00:12:05.894 user 0m16.996s 00:12:05.894 sys 0m1.910s 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:05.894 ************************************ 00:12:05.894 END TEST lvs_grow_clean 00:12:05.894 ************************************ 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:05.894 ************************************ 00:12:05.894 START TEST lvs_grow_dirty 00:12:05.894 ************************************ 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:05.894 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.152 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.152 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:06.410 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:06.410 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:06.668 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:06.668 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:06.668 19:06:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:06.931 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:06.931 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:06.931 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff lvol 150 00:12:06.931 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=680e0f72-d962-496f-b674-78f6b28c13f1 00:12:06.931 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:06.931 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:07.237 
[2024-07-15 19:06:47.585072] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:07.237 [2024-07-15 19:06:47.585181] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:07.237 true 00:12:07.237 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:07.237 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:07.494 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:07.494 19:06:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:07.752 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 680e0f72-d962-496f-b674-78f6b28c13f1 00:12:08.317 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:08.317 [2024-07-15 19:06:48.676371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.317 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.574 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3270503 00:12:08.574 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3270503 /var/tmp/bdevperf.sock 00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3270503 ']' 00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:08.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
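The rescan just logged (old block count 51200, new block count 102400) is the middle step of the resize path both variants exercise; a minimal sketch of that path, with $lvs standing in for the lvstore UUID and $aio_file for the backing file under test/nvmf/target:

    # 1. grow the backing file from 200M to 400M
    truncate -s 400M "$aio_file"
    # 2. have SPDK re-probe the AIO bdev's size (51200 -> 102400 blocks of 4096 bytes)
    scripts/rpc.py bdev_aio_rescan aio_bdev
    # 3. extend the lvstore onto the newly visible clusters (total_data_clusters 49 -> 99)
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"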
00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.575 19:06:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:08.575 [2024-07-15 19:06:48.982197] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:08.575 [2024-07-15 19:06:48.982270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270503 ] 00:12:08.832 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.832 [2024-07-15 19:06:49.045389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.832 [2024-07-15 19:06:49.161794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.089 19:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.089 19:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:09.089 19:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:09.345 Nvme0n1 00:12:09.345 19:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:09.602 [ 00:12:09.602 { 00:12:09.602 "name": "Nvme0n1", 00:12:09.602 "aliases": [ 00:12:09.602 "680e0f72-d962-496f-b674-78f6b28c13f1" 00:12:09.602 ], 00:12:09.602 "product_name": "NVMe disk", 00:12:09.602 "block_size": 4096, 00:12:09.602 "num_blocks": 38912, 00:12:09.602 "uuid": "680e0f72-d962-496f-b674-78f6b28c13f1", 00:12:09.602 "assigned_rate_limits": { 00:12:09.602 "rw_ios_per_sec": 0, 00:12:09.602 "rw_mbytes_per_sec": 0, 00:12:09.602 "r_mbytes_per_sec": 0, 00:12:09.602 "w_mbytes_per_sec": 0 00:12:09.602 }, 00:12:09.602 "claimed": false, 00:12:09.602 "zoned": false, 00:12:09.602 "supported_io_types": { 00:12:09.602 "read": true, 00:12:09.602 "write": true, 00:12:09.602 "unmap": true, 00:12:09.602 "flush": true, 00:12:09.602 "reset": true, 00:12:09.602 "nvme_admin": true, 00:12:09.602 "nvme_io": true, 00:12:09.602 "nvme_io_md": false, 00:12:09.602 "write_zeroes": true, 00:12:09.602 "zcopy": false, 00:12:09.602 "get_zone_info": false, 00:12:09.602 "zone_management": false, 00:12:09.602 "zone_append": false, 00:12:09.602 "compare": true, 00:12:09.602 "compare_and_write": true, 00:12:09.602 "abort": true, 00:12:09.602 "seek_hole": false, 00:12:09.602 "seek_data": false, 00:12:09.602 "copy": true, 00:12:09.602 "nvme_iov_md": false 00:12:09.602 }, 00:12:09.602 "memory_domains": [ 00:12:09.602 { 00:12:09.602 "dma_device_id": "system", 00:12:09.602 "dma_device_type": 1 00:12:09.602 } 00:12:09.602 ], 00:12:09.602 "driver_specific": { 00:12:09.602 "nvme": [ 00:12:09.602 { 00:12:09.602 "trid": { 00:12:09.602 "trtype": "TCP", 00:12:09.602 "adrfam": "IPv4", 00:12:09.602 "traddr": "10.0.0.2", 00:12:09.602 "trsvcid": "4420", 00:12:09.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:09.602 }, 00:12:09.602 "ctrlr_data": { 00:12:09.602 "cntlid": 1, 00:12:09.602 "vendor_id": "0x8086", 00:12:09.602 "model_number": "SPDK bdev Controller", 00:12:09.602 "serial_number": "SPDK0", 
00:12:09.602 "firmware_revision": "24.09", 00:12:09.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:09.602 "oacs": { 00:12:09.602 "security": 0, 00:12:09.602 "format": 0, 00:12:09.602 "firmware": 0, 00:12:09.602 "ns_manage": 0 00:12:09.602 }, 00:12:09.602 "multi_ctrlr": true, 00:12:09.602 "ana_reporting": false 00:12:09.602 }, 00:12:09.602 "vs": { 00:12:09.602 "nvme_version": "1.3" 00:12:09.603 }, 00:12:09.603 "ns_data": { 00:12:09.603 "id": 1, 00:12:09.603 "can_share": true 00:12:09.603 } 00:12:09.603 } 00:12:09.603 ], 00:12:09.603 "mp_policy": "active_passive" 00:12:09.603 } 00:12:09.603 } 00:12:09.603 ] 00:12:09.603 19:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3270639 00:12:09.603 19:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:09.603 19:06:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:09.603 Running I/O for 10 seconds... 00:12:10.534 Latency(us) 00:12:10.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.534 Nvme0n1 : 1.00 14016.00 54.75 0.00 0.00 0.00 0.00 0.00 00:12:10.534 =================================================================================================================== 00:12:10.534 Total : 14016.00 54.75 0.00 0.00 0.00 0.00 0.00 00:12:10.534 00:12:11.465 19:06:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:11.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:11.722 Nvme0n1 : 2.00 14174.50 55.37 0.00 0.00 0.00 0.00 0.00 00:12:11.722 =================================================================================================================== 00:12:11.722 Total : 14174.50 55.37 0.00 0.00 0.00 0.00 0.00 00:12:11.722 00:12:11.722 true 00:12:11.722 19:06:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:11.722 19:06:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:11.980 19:06:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:11.980 19:06:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:11.980 19:06:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3270639 00:12:12.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:12.546 Nvme0n1 : 3.00 14226.67 55.57 0.00 0.00 0.00 0.00 0.00 00:12:12.546 =================================================================================================================== 00:12:12.546 Total : 14226.67 55.57 0.00 0.00 0.00 0.00 0.00 00:12:12.546 00:12:13.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:13.918 Nvme0n1 : 4.00 14333.75 55.99 0.00 0.00 0.00 0.00 0.00 00:12:13.918 =================================================================================================================== 00:12:13.918 Total : 14333.75 55.99 0.00 
0.00 0.00 0.00 0.00 00:12:13.918 00:12:14.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.853 Nvme0n1 : 5.00 14359.80 56.09 0.00 0.00 0.00 0.00 0.00 00:12:14.853 =================================================================================================================== 00:12:14.853 Total : 14359.80 56.09 0.00 0.00 0.00 0.00 0.00 00:12:14.853 00:12:15.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.787 Nvme0n1 : 6.00 14441.17 56.41 0.00 0.00 0.00 0.00 0.00 00:12:15.787 =================================================================================================================== 00:12:15.787 Total : 14441.17 56.41 0.00 0.00 0.00 0.00 0.00 00:12:15.787 00:12:16.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.720 Nvme0n1 : 7.00 14508.14 56.67 0.00 0.00 0.00 0.00 0.00 00:12:16.720 =================================================================================================================== 00:12:16.720 Total : 14508.14 56.67 0.00 0.00 0.00 0.00 0.00 00:12:16.720 00:12:17.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.655 Nvme0n1 : 8.00 14518.50 56.71 0.00 0.00 0.00 0.00 0.00 00:12:17.655 =================================================================================================================== 00:12:17.655 Total : 14518.50 56.71 0.00 0.00 0.00 0.00 0.00 00:12:17.655 00:12:18.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.589 Nvme0n1 : 9.00 14561.89 56.88 0.00 0.00 0.00 0.00 0.00 00:12:18.589 =================================================================================================================== 00:12:18.589 Total : 14561.89 56.88 0.00 0.00 0.00 0.00 0.00 00:12:18.589 00:12:19.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.964 Nvme0n1 : 10.00 14571.30 56.92 0.00 0.00 0.00 0.00 0.00 00:12:19.964 =================================================================================================================== 00:12:19.964 Total : 14571.30 56.92 0.00 0.00 0.00 0.00 0.00 00:12:19.964 00:12:19.964 00:12:19.964 Latency(us) 00:12:19.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.964 Nvme0n1 : 10.01 14574.18 56.93 0.00 0.00 8776.25 4975.88 18252.99 00:12:19.964 =================================================================================================================== 00:12:19.964 Total : 14574.18 56.93 0.00 0.00 8776.25 4975.88 18252.99 00:12:19.964 0 00:12:19.964 19:06:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3270503 00:12:19.964 19:06:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3270503 ']' 00:12:19.964 19:06:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3270503 00:12:19.964 19:06:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:19.964 19:06:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.964 19:06:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270503 00:12:19.964 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:19.964 19:07:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:19.964 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270503' 00:12:19.964 killing process with pid 3270503 00:12:19.964 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3270503 00:12:19.964 Received shutdown signal, test time was about 10.000000 seconds 00:12:19.964 00:12:19.964 Latency(us) 00:12:19.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.964 =================================================================================================================== 00:12:19.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.965 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3270503 00:12:19.965 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.222 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:20.480 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:20.480 19:07:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:20.740 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:20.740 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:20.740 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3268014 00:12:20.740 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3268014 00:12:20.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3268014 Killed "${NVMF_APP[@]}" "$@" 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3271977 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3271977 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3271977 ']' 00:12:20.998 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.999 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.999 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.999 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.999 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:20.999 [2024-07-15 19:07:01.225161] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:20.999 [2024-07-15 19:07:01.225253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.999 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.999 [2024-07-15 19:07:01.292636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.999 [2024-07-15 19:07:01.411348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.999 [2024-07-15 19:07:01.411416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.999 [2024-07-15 19:07:01.411440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.999 [2024-07-15 19:07:01.411454] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.999 [2024-07-15 19:07:01.411466] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
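What makes this the dirty variant is the restart just logged: the first target (pid 3268014) was killed with SIGKILL after the grow, so the lvstore never saw a clean shutdown; a fresh nvmf_tgt is then started, and re-creating the AIO bdev makes the lvol module reload the blobstore, which produces the recovery notices that follow. Roughly (a sketch; the trace additionally runs the target inside the cvl_0_0_ns_spdk network namespace, and $nvmfpid is a placeholder):

    # leave the grown lvstore dirty on disk by killing the target hard
    kill -9 "$nvmfpid"; wait "$nvmfpid"
    # start a fresh target and wait for its RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # re-register the backing file; examining it reloads the lvstore and runs blobstore recovery
    scripts/rpc.py bdev_aio_create \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096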
00:12:20.999 [2024-07-15 19:07:01.411496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.257 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.257 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:21.257 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.257 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:21.257 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:21.257 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.257 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:21.535 [2024-07-15 19:07:01.837186] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:21.535 [2024-07-15 19:07:01.837331] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:21.535 [2024-07-15 19:07:01.837388] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 680e0f72-d962-496f-b674-78f6b28c13f1 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=680e0f72-d962-496f-b674-78f6b28c13f1 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:21.535 19:07:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:21.855 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 680e0f72-d962-496f-b674-78f6b28c13f1 -t 2000 00:12:22.114 [ 00:12:22.114 { 00:12:22.114 "name": "680e0f72-d962-496f-b674-78f6b28c13f1", 00:12:22.114 "aliases": [ 00:12:22.114 "lvs/lvol" 00:12:22.114 ], 00:12:22.114 "product_name": "Logical Volume", 00:12:22.114 "block_size": 4096, 00:12:22.114 "num_blocks": 38912, 00:12:22.114 "uuid": "680e0f72-d962-496f-b674-78f6b28c13f1", 00:12:22.114 "assigned_rate_limits": { 00:12:22.114 "rw_ios_per_sec": 0, 00:12:22.114 "rw_mbytes_per_sec": 0, 00:12:22.114 "r_mbytes_per_sec": 0, 00:12:22.114 "w_mbytes_per_sec": 0 00:12:22.114 }, 00:12:22.114 "claimed": false, 00:12:22.114 "zoned": false, 00:12:22.114 "supported_io_types": { 00:12:22.114 "read": true, 00:12:22.114 "write": true, 00:12:22.114 "unmap": true, 00:12:22.114 "flush": false, 00:12:22.114 "reset": true, 00:12:22.114 "nvme_admin": false, 00:12:22.114 "nvme_io": false, 00:12:22.114 "nvme_io_md": 
false, 00:12:22.114 "write_zeroes": true, 00:12:22.114 "zcopy": false, 00:12:22.114 "get_zone_info": false, 00:12:22.114 "zone_management": false, 00:12:22.114 "zone_append": false, 00:12:22.114 "compare": false, 00:12:22.114 "compare_and_write": false, 00:12:22.114 "abort": false, 00:12:22.114 "seek_hole": true, 00:12:22.114 "seek_data": true, 00:12:22.114 "copy": false, 00:12:22.114 "nvme_iov_md": false 00:12:22.115 }, 00:12:22.115 "driver_specific": { 00:12:22.115 "lvol": { 00:12:22.115 "lvol_store_uuid": "c71d98a2-fb19-4541-9784-dcb8a09dd7ff", 00:12:22.115 "base_bdev": "aio_bdev", 00:12:22.115 "thin_provision": false, 00:12:22.115 "num_allocated_clusters": 38, 00:12:22.115 "snapshot": false, 00:12:22.115 "clone": false, 00:12:22.115 "esnap_clone": false 00:12:22.115 } 00:12:22.115 } 00:12:22.115 } 00:12:22.115 ] 00:12:22.115 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:22.115 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:22.115 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:22.397 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:22.397 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:22.397 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:22.655 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:22.655 19:07:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:22.913 [2024-07-15 19:07:03.110111] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:22.913 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:22.913 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:22.913 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:22.913 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:22.914 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:23.171 request: 00:12:23.171 { 00:12:23.171 "uuid": "c71d98a2-fb19-4541-9784-dcb8a09dd7ff", 00:12:23.171 "method": "bdev_lvol_get_lvstores", 00:12:23.171 "req_id": 1 00:12:23.171 } 00:12:23.171 Got JSON-RPC error response 00:12:23.171 response: 00:12:23.171 { 00:12:23.171 "code": -19, 00:12:23.171 "message": "No such device" 00:12:23.171 } 00:12:23.171 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:23.171 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:23.171 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:23.171 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:23.171 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:23.429 aio_bdev 00:12:23.429 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 680e0f72-d962-496f-b674-78f6b28c13f1 00:12:23.429 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=680e0f72-d962-496f-b674-78f6b28c13f1 00:12:23.429 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:23.429 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:23.429 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:23.429 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:23.429 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:23.686 19:07:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 680e0f72-d962-496f-b674-78f6b28c13f1 -t 2000 00:12:23.944 [ 00:12:23.944 { 00:12:23.944 "name": "680e0f72-d962-496f-b674-78f6b28c13f1", 00:12:23.944 "aliases": [ 00:12:23.944 "lvs/lvol" 00:12:23.944 ], 00:12:23.944 "product_name": "Logical Volume", 00:12:23.944 "block_size": 4096, 00:12:23.944 "num_blocks": 38912, 00:12:23.944 "uuid": "680e0f72-d962-496f-b674-78f6b28c13f1", 00:12:23.944 "assigned_rate_limits": { 00:12:23.944 "rw_ios_per_sec": 0, 00:12:23.944 "rw_mbytes_per_sec": 0, 00:12:23.944 "r_mbytes_per_sec": 0, 00:12:23.944 "w_mbytes_per_sec": 0 00:12:23.944 }, 00:12:23.944 "claimed": false, 00:12:23.944 "zoned": false, 00:12:23.944 "supported_io_types": { 
00:12:23.944 "read": true, 00:12:23.944 "write": true, 00:12:23.944 "unmap": true, 00:12:23.944 "flush": false, 00:12:23.944 "reset": true, 00:12:23.944 "nvme_admin": false, 00:12:23.944 "nvme_io": false, 00:12:23.944 "nvme_io_md": false, 00:12:23.944 "write_zeroes": true, 00:12:23.944 "zcopy": false, 00:12:23.944 "get_zone_info": false, 00:12:23.944 "zone_management": false, 00:12:23.944 "zone_append": false, 00:12:23.944 "compare": false, 00:12:23.944 "compare_and_write": false, 00:12:23.944 "abort": false, 00:12:23.944 "seek_hole": true, 00:12:23.944 "seek_data": true, 00:12:23.944 "copy": false, 00:12:23.944 "nvme_iov_md": false 00:12:23.944 }, 00:12:23.944 "driver_specific": { 00:12:23.944 "lvol": { 00:12:23.944 "lvol_store_uuid": "c71d98a2-fb19-4541-9784-dcb8a09dd7ff", 00:12:23.944 "base_bdev": "aio_bdev", 00:12:23.944 "thin_provision": false, 00:12:23.944 "num_allocated_clusters": 38, 00:12:23.944 "snapshot": false, 00:12:23.944 "clone": false, 00:12:23.944 "esnap_clone": false 00:12:23.944 } 00:12:23.944 } 00:12:23.944 } 00:12:23.944 ] 00:12:23.944 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:23.944 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:23.944 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:24.202 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:24.202 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:24.202 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:24.459 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:24.459 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 680e0f72-d962-496f-b674-78f6b28c13f1 00:12:24.717 19:07:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c71d98a2-fb19-4541-9784-dcb8a09dd7ff 00:12:24.975 19:07:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:25.233 00:12:25.233 real 0m19.179s 00:12:25.233 user 0m48.410s 00:12:25.233 sys 0m4.742s 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:25.233 ************************************ 00:12:25.233 END TEST lvs_grow_dirty 00:12:25.233 ************************************ 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
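The pass/fail criterion in both variants is nothing more than the cluster-count comparisons logged here; a minimal sketch of that check, using the same RPC-plus-jq pattern as the trace ($lvs is a placeholder for the lvstore UUID):

    # before the grow: a 200M backing file with 4M clusters yields 49 data clusters
    (( $(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 49 ))
    # after truncate/rescan/grow_lvstore: 99 data clusters, of which the 150M lvol
    # holds 38 (num_allocated_clusters), leaving 61 free
    (( $(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))
    (( $(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))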
00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:25.233 nvmf_trace.0 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.233 rmmod nvme_tcp 00:12:25.233 rmmod nvme_fabrics 00:12:25.233 rmmod nvme_keyring 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3271977 ']' 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3271977 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3271977 ']' 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3271977 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3271977 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3271977' 00:12:25.233 killing process with pid 3271977 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3271977 00:12:25.233 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3271977 00:12:25.492 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.492 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.492 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.492 
19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.492 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.492 19:07:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.492 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.492 19:07:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.027 19:07:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.027 00:12:28.027 real 0m42.028s 00:12:28.027 user 1m11.174s 00:12:28.027 sys 0m8.487s 00:12:28.027 19:07:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.027 19:07:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:28.027 ************************************ 00:12:28.027 END TEST nvmf_lvs_grow 00:12:28.027 ************************************ 00:12:28.027 19:07:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:28.027 19:07:07 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:28.027 19:07:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:28.027 19:07:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.027 19:07:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.027 ************************************ 00:12:28.027 START TEST nvmf_bdev_io_wait 00:12:28.027 ************************************ 00:12:28.027 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:28.027 * Looking for test storage... 
00:12:28.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.027 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.027 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:28.027 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.028 19:07:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.932 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:29.932 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:29.932 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:29.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:29.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:29.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:29.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.933 19:07:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:29.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:29.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:12:29.933 00:12:29.933 --- 10.0.0.2 ping statistics --- 00:12:29.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.933 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:12:29.933 00:12:29.933 --- 10.0.0.1 ping statistics --- 00:12:29.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.933 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.933 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3274493 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3274493 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3274493 ']' 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.934 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:29.934 [2024-07-15 19:07:10.212042] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
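For reference, the nvmf_tcp_init plumbing traced above boils down to the sequence below (a condensed sketch of the commands visible in the trace, with the workspace prefix shortened; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this E810 test bed):

  # one NIC port goes into a private namespace for the target, the other stays in the root namespace for the initiator
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # then start the target inside the namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  waitforlisten $!    # test-framework helper: blocks until /var/tmp/spdk.sock answers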
00:12:29.934 [2024-07-15 19:07:10.212127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.934 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.934 [2024-07-15 19:07:10.272841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.193 [2024-07-15 19:07:10.380588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.193 [2024-07-15 19:07:10.380640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.193 [2024-07-15 19:07:10.380657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.193 [2024-07-15 19:07:10.380670] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.193 [2024-07-15 19:07:10.380681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.193 [2024-07-15 19:07:10.380765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.193 [2024-07-15 19:07:10.380817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.193 [2024-07-15 19:07:10.380932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.193 [2024-07-15 19:07:10.380935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 [2024-07-15 19:07:10.518261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 Malloc0 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 [2024-07-15 19:07:10.580549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3274522 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3274523 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3274526 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:30.193 { 00:12:30.193 "params": { 00:12:30.193 "name": "Nvme$subsystem", 00:12:30.193 "trtype": "$TEST_TRANSPORT", 00:12:30.193 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:12:30.193 "adrfam": "ipv4", 00:12:30.193 "trsvcid": "$NVMF_PORT", 00:12:30.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:30.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:30.193 "hdgst": ${hdgst:-false}, 00:12:30.193 "ddgst": ${ddgst:-false} 00:12:30.193 }, 00:12:30.193 "method": "bdev_nvme_attach_controller" 00:12:30.193 } 00:12:30.193 EOF 00:12:30.193 )") 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:30.193 { 00:12:30.193 "params": { 00:12:30.193 "name": "Nvme$subsystem", 00:12:30.193 "trtype": "$TEST_TRANSPORT", 00:12:30.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:30.193 "adrfam": "ipv4", 00:12:30.193 "trsvcid": "$NVMF_PORT", 00:12:30.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:30.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:30.193 "hdgst": ${hdgst:-false}, 00:12:30.193 "ddgst": ${ddgst:-false} 00:12:30.193 }, 00:12:30.193 "method": "bdev_nvme_attach_controller" 00:12:30.193 } 00:12:30.193 EOF 00:12:30.193 )") 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3274528 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:30.193 { 00:12:30.193 "params": { 00:12:30.193 "name": "Nvme$subsystem", 00:12:30.193 "trtype": "$TEST_TRANSPORT", 00:12:30.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:30.193 "adrfam": "ipv4", 00:12:30.193 "trsvcid": "$NVMF_PORT", 00:12:30.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:30.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:30.193 "hdgst": ${hdgst:-false}, 00:12:30.193 "ddgst": ${ddgst:-false} 00:12:30.193 }, 00:12:30.193 "method": "bdev_nvme_attach_controller" 00:12:30.193 } 00:12:30.193 EOF 00:12:30.193 )") 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:12:30.193 { 00:12:30.193 "params": { 00:12:30.193 "name": "Nvme$subsystem", 00:12:30.193 "trtype": "$TEST_TRANSPORT", 00:12:30.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:30.193 "adrfam": "ipv4", 00:12:30.193 "trsvcid": "$NVMF_PORT", 00:12:30.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:30.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:30.193 "hdgst": ${hdgst:-false}, 00:12:30.193 "ddgst": ${ddgst:-false} 00:12:30.193 }, 00:12:30.193 "method": "bdev_nvme_attach_controller" 00:12:30.193 } 00:12:30.193 EOF 00:12:30.193 )") 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3274522 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:30.193 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:30.193 "params": { 00:12:30.193 "name": "Nvme1", 00:12:30.193 "trtype": "tcp", 00:12:30.193 "traddr": "10.0.0.2", 00:12:30.193 "adrfam": "ipv4", 00:12:30.193 "trsvcid": "4420", 00:12:30.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.193 "hdgst": false, 00:12:30.193 "ddgst": false 00:12:30.193 }, 00:12:30.193 "method": "bdev_nvme_attach_controller" 00:12:30.193 }' 00:12:30.194 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:30.194 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:30.194 "params": { 00:12:30.194 "name": "Nvme1", 00:12:30.194 "trtype": "tcp", 00:12:30.194 "traddr": "10.0.0.2", 00:12:30.194 "adrfam": "ipv4", 00:12:30.194 "trsvcid": "4420", 00:12:30.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.194 "hdgst": false, 00:12:30.194 "ddgst": false 00:12:30.194 }, 00:12:30.194 "method": "bdev_nvme_attach_controller" 00:12:30.194 }' 00:12:30.194 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:30.194 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:30.194 "params": { 00:12:30.194 "name": "Nvme1", 00:12:30.194 "trtype": "tcp", 00:12:30.194 "traddr": "10.0.0.2", 00:12:30.194 "adrfam": "ipv4", 00:12:30.194 "trsvcid": "4420", 00:12:30.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.194 "hdgst": false, 00:12:30.194 "ddgst": false 00:12:30.194 }, 00:12:30.194 "method": "bdev_nvme_attach_controller" 00:12:30.194 }' 00:12:30.194 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:30.194 19:07:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:30.194 "params": { 00:12:30.194 "name": "Nvme1", 00:12:30.194 "trtype": "tcp", 00:12:30.194 "traddr": "10.0.0.2", 00:12:30.194 "adrfam": "ipv4", 00:12:30.194 "trsvcid": "4420", 00:12:30.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.194 "hdgst": false, 00:12:30.194 "ddgst": false 00:12:30.194 }, 00:12:30.194 "method": 
"bdev_nvme_attach_controller" 00:12:30.194 }' 00:12:30.452 [2024-07-15 19:07:10.630146] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:30.452 [2024-07-15 19:07:10.630261] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:30.452 [2024-07-15 19:07:10.630329] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:30.452 [2024-07-15 19:07:10.630329] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:30.452 [2024-07-15 19:07:10.630330] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:30.452 [2024-07-15 19:07:10.630406] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 19:07:10.630406] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 19:07:10.630407] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:30.452 --proc-type=auto ] 00:12:30.452 --proc-type=auto ] 00:12:30.452 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.452 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.452 [2024-07-15 19:07:10.805834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.452 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.710 [2024-07-15 19:07:10.905264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:30.710 [2024-07-15 19:07:10.910181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.710 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.710 [2024-07-15 19:07:11.006751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:30.710 [2024-07-15 19:07:11.009726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.710 [2024-07-15 19:07:11.105864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:30.710 [2024-07-15 19:07:11.110093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.970 [2024-07-15 19:07:11.203804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:30.970 Running I/O for 1 seconds... 00:12:30.970 Running I/O for 1 seconds... 00:12:31.230 Running I/O for 1 seconds... 00:12:31.230 Running I/O for 1 seconds... 
00:12:32.167 00:12:32.167 Latency(us) 00:12:32.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.167 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:32.167 Nvme1n1 : 1.00 200855.32 784.59 0.00 0.00 634.99 256.38 916.29 00:12:32.167 =================================================================================================================== 00:12:32.167 Total : 200855.32 784.59 0.00 0.00 634.99 256.38 916.29 00:12:32.167 00:12:32.167 Latency(us) 00:12:32.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.167 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:32.167 Nvme1n1 : 1.02 7110.37 27.77 0.00 0.00 17812.58 7427.41 25826.04 00:12:32.167 =================================================================================================================== 00:12:32.167 Total : 7110.37 27.77 0.00 0.00 17812.58 7427.41 25826.04 00:12:32.167 00:12:32.167 Latency(us) 00:12:32.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.167 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:32.167 Nvme1n1 : 1.01 9469.96 36.99 0.00 0.00 13448.72 9126.49 23398.78 00:12:32.167 =================================================================================================================== 00:12:32.167 Total : 9469.96 36.99 0.00 0.00 13448.72 9126.49 23398.78 00:12:32.167 00:12:32.167 Latency(us) 00:12:32.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.167 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:32.167 Nvme1n1 : 1.00 7325.56 28.62 0.00 0.00 17424.41 4684.61 41166.32 00:12:32.167 =================================================================================================================== 00:12:32.167 Total : 7325.56 28.62 0.00 0.00 17424.41 4684.61 41166.32 00:12:32.426 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3274523 00:12:32.426 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3274526 00:12:32.426 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3274528 00:12:32.426 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.426 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.426 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.685 rmmod nvme_tcp 00:12:32.685 rmmod nvme_fabrics 00:12:32.685 rmmod nvme_keyring 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3274493 ']' 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3274493 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3274493 ']' 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3274493 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3274493 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3274493' 00:12:32.685 killing process with pid 3274493 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3274493 00:12:32.685 19:07:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3274493 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.942 19:07:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.844 19:07:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:34.844 00:12:34.844 real 0m7.244s 00:12:34.844 user 0m16.229s 00:12:34.844 sys 0m3.710s 00:12:34.844 19:07:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.844 19:07:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:34.844 ************************************ 00:12:34.844 END TEST nvmf_bdev_io_wait 00:12:34.844 ************************************ 00:12:35.102 19:07:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:35.102 19:07:15 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:35.102 19:07:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.102 19:07:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.102 19:07:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.102 ************************************ 00:12:35.102 START TEST nvmf_queue_depth 00:12:35.102 ************************************ 
00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:35.102 * Looking for test storage... 00:12:35.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.102 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.103 19:07:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.040 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.041 
19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:37.041 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:37.041 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:37.041 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:37.041 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:37.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:12:37.041 00:12:37.041 --- 10.0.0.2 ping statistics --- 00:12:37.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.041 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:12:37.041 00:12:37.041 --- 10.0.0.1 ping statistics --- 00:12:37.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.041 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3276750 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3276750 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3276750 ']' 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.041 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.301 [2024-07-15 19:07:17.484958] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:12:37.301 [2024-07-15 19:07:17.485048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.301 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.301 [2024-07-15 19:07:17.554524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.301 [2024-07-15 19:07:17.671810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.301 [2024-07-15 19:07:17.671874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.301 [2024-07-15 19:07:17.671910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.301 [2024-07-15 19:07:17.671924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.301 [2024-07-15 19:07:17.671937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.301 [2024-07-15 19:07:17.671975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.560 [2024-07-15 19:07:17.824965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.560 Malloc0 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.560 
19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.560 [2024-07-15 19:07:17.885669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3276886 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3276886 /var/tmp/bdevperf.sock 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3276886 ']' 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:37.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.560 19:07:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.560 [2024-07-15 19:07:17.932442] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:12:37.560 [2024-07-15 19:07:17.932505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276886 ] 00:12:37.560 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.821 [2024-07-15 19:07:17.994828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.821 [2024-07-15 19:07:18.111304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.821 19:07:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.821 19:07:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:37.821 19:07:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:37.821 19:07:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.821 19:07:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:38.081 NVMe0n1 00:12:38.081 19:07:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.081 19:07:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:38.081 Running I/O for 10 seconds... 00:12:50.305 00:12:50.305 Latency(us) 00:12:50.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.305 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:50.305 Verification LBA range: start 0x0 length 0x4000 00:12:50.305 NVMe0n1 : 10.09 7754.88 30.29 0.00 0.00 131345.74 21845.33 78060.66 00:12:50.305 =================================================================================================================== 00:12:50.305 Total : 7754.88 30.29 0.00 0.00 131345.74 21845.33 78060.66 00:12:50.305 0 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3276886 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3276886 ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3276886 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276886 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276886' 00:12:50.305 killing process with pid 3276886 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3276886 00:12:50.305 Received shutdown signal, test time was about 10.000000 seconds 00:12:50.305 00:12:50.305 Latency(us) 00:12:50.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.305 
=================================================================================================================== 00:12:50.305 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3276886 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.305 rmmod nvme_tcp 00:12:50.305 rmmod nvme_fabrics 00:12:50.305 rmmod nvme_keyring 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3276750 ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3276750 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3276750 ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3276750 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276750 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276750' 00:12:50.305 killing process with pid 3276750 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3276750 00:12:50.305 19:07:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3276750 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.305 19:07:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.872 19:07:31 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:50.872 00:12:50.872 real 0m15.948s 00:12:50.872 user 0m22.459s 00:12:50.872 sys 0m3.033s 00:12:50.872 19:07:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.872 19:07:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:50.872 ************************************ 00:12:50.872 END TEST nvmf_queue_depth 00:12:50.872 ************************************ 00:12:50.872 19:07:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:50.872 19:07:31 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:50.872 19:07:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:50.872 19:07:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.872 19:07:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:50.872 ************************************ 00:12:50.872 START TEST nvmf_target_multipath 00:12:50.872 ************************************ 00:12:50.872 19:07:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:51.130 * Looking for test storage... 00:12:51.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.130 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.131 19:07:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:53.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:53.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:53.031 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.031 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:53.032 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:53.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:12:53.032 00:12:53.032 --- 10.0.0.2 ping statistics --- 00:12:53.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.032 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:12:53.032 00:12:53.032 --- 10.0.0.1 ping statistics --- 00:12:53.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.032 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:53.032 only one NIC for nvmf test 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.032 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:53.032 rmmod nvme_tcp 00:12:53.032 rmmod nvme_fabrics 00:12:53.032 rmmod nvme_keyring 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.291 19:07:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.198 00:12:55.198 real 0m4.246s 00:12:55.198 user 0m0.787s 00:12:55.198 sys 0m1.449s 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.198 19:07:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:55.198 ************************************ 00:12:55.198 END TEST nvmf_target_multipath 00:12:55.198 ************************************ 00:12:55.198 19:07:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:55.198 19:07:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:55.198 19:07:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:55.198 19:07:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.198 19:07:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.198 ************************************ 00:12:55.198 START TEST nvmf_zcopy 00:12:55.198 ************************************ 00:12:55.198 19:07:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:55.457 * Looking for test storage... 
00:12:55.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.457 19:07:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.458 19:07:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.364 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.364 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.365 
19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.365 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.365 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.365 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:12:57.365 00:12:57.365 --- 10.0.0.2 ping statistics --- 00:12:57.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.365 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:12:57.365 00:12:57.365 --- 10.0.0.1 ping statistics --- 00:12:57.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.365 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3281944 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3281944 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3281944 ']' 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.365 19:07:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.365 [2024-07-15 19:07:37.738796] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:57.365 [2024-07-15 19:07:37.738897] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.365 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.624 [2024-07-15 19:07:37.805741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.624 [2024-07-15 19:07:37.912827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.624 [2024-07-15 19:07:37.912886] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
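Before the nvmf_tgt start-up notices above, nvmf/common.sh's nvmf_tcp_init set up the test network: the first E810 port (cvl_0_0) is moved into a dedicated namespace to act as the target-side interface, the second port (cvl_0_1) stays in the root namespace as the initiator side, and reachability is checked with one ping in each direction. A minimal standalone sketch of that same bring-up, assuming the interface names and 10.0.0.0/24 addresses from this run (on other hardware the port names found under /sys/bus/pci/devices/*/net would differ):

    # target port goes into its own namespace, initiator port stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side gets 10.0.0.1, target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to port 4420 on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # reachability check in both directions, exactly as the harness does
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Everything target-side is then run under ip netns exec cvl_0_0_ns_spdk, so with NET_TYPE=phy the NVMe/TCP traffic really traverses the two physical ports instead of loopback.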
00:12:57.624 [2024-07-15 19:07:37.912917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.624 [2024-07-15 19:07:37.912928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.624 [2024-07-15 19:07:37.912939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.624 [2024-07-15 19:07:37.912965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.624 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.624 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:57.624 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.624 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.624 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.882 [2024-07-15 19:07:38.062808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.882 [2024-07-15 19:07:38.079027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.882 malloc0 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.882 
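At this point the target side is configured: nvmf_tgt was started inside the cvl_0_0_ns_spdk namespace on core mask 0x2, the TCP transport was created with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 was created with a 10-namespace cap, listeners were added on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks was created. The rpc_cmd calls in the trace talk to /var/tmp/spdk.sock; a sketch of the same sequence issued directly with scripts/rpc.py (an assumption about how rpc_cmd is normally resolved; the binaries and paths are the workspace paths from this run):

    # start the target in the target-side namespace, as the harness's nvmfappstart does
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with the options traced above (-o, in-capsule data size 0, zero-copy on)
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem with a 10-namespace limit, allowing any host
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MB backing bdev with 4096-byte blocks
    $RPC bdev_malloc_create 32 4096 -b malloc0

The --zcopy flag is the point of this test: it enables the transport's zero-copy path, and -c 0 keeps data out of the command capsule so transfers go through separate data PDUs. The nvmf_subsystem_add_ns call that attaches malloc0 as namespace 1 follows in the trace below.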
19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:57.882 { 00:12:57.882 "params": { 00:12:57.882 "name": "Nvme$subsystem", 00:12:57.882 "trtype": "$TEST_TRANSPORT", 00:12:57.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:57.882 "adrfam": "ipv4", 00:12:57.882 "trsvcid": "$NVMF_PORT", 00:12:57.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:57.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:57.882 "hdgst": ${hdgst:-false}, 00:12:57.882 "ddgst": ${ddgst:-false} 00:12:57.882 }, 00:12:57.882 "method": "bdev_nvme_attach_controller" 00:12:57.882 } 00:12:57.882 EOF 00:12:57.882 )") 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:57.882 19:07:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:57.882 "params": { 00:12:57.882 "name": "Nvme1", 00:12:57.882 "trtype": "tcp", 00:12:57.882 "traddr": "10.0.0.2", 00:12:57.882 "adrfam": "ipv4", 00:12:57.882 "trsvcid": "4420", 00:12:57.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:57.882 "hdgst": false, 00:12:57.882 "ddgst": false 00:12:57.882 }, 00:12:57.882 "method": "bdev_nvme_attach_controller" 00:12:57.882 }' 00:12:57.882 [2024-07-15 19:07:38.161423] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:12:57.882 [2024-07-15 19:07:38.161511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282019 ] 00:12:57.882 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.882 [2024-07-15 19:07:38.230587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.140 [2024-07-15 19:07:38.352096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.397 Running I/O for 10 seconds... 
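The initiator side is just bdevperf driven by a generated JSON config: gen_nvmf_target_json (traced above) emits one bdev_nvme_attach_controller entry per subsystem, bdevperf reads it on --json via process substitution (/dev/fd/62), and the first run does 10 seconds of verify I/O at queue depth 128 with 8 KiB requests. A sketch of an equivalent standalone invocation; the attach entry is copied from the printf in the trace, while the outer "subsystems"/"bdev"/"config" wrapper is an assumption about how the harness packages it (the wrapper itself is not printed in the log):

    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF

    # same workload parameters as the first bdevperf run traced above
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192

The Latency(us) table that follows reports the outcome of this run: 5754.12 IOPS (44.95 MiB/s) with an average completion latency of about 22.2 ms, consistent with a queue depth of 128 at that IOPS.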
00:13:08.413 00:13:08.413 Latency(us) 00:13:08.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.413 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:08.413 Verification LBA range: start 0x0 length 0x1000 00:13:08.413 Nvme1n1 : 10.02 5754.12 44.95 0.00 0.00 22182.83 2609.30 33593.27 00:13:08.413 =================================================================================================================== 00:13:08.413 Total : 5754.12 44.95 0.00 0.00 22182.83 2609.30 33593.27 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3283282 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:08.672 { 00:13:08.672 "params": { 00:13:08.672 "name": "Nvme$subsystem", 00:13:08.672 "trtype": "$TEST_TRANSPORT", 00:13:08.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:08.672 "adrfam": "ipv4", 00:13:08.672 "trsvcid": "$NVMF_PORT", 00:13:08.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:08.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:08.672 "hdgst": ${hdgst:-false}, 00:13:08.672 "ddgst": ${ddgst:-false} 00:13:08.672 }, 00:13:08.672 "method": "bdev_nvme_attach_controller" 00:13:08.672 } 00:13:08.672 EOF 00:13:08.672 )") 00:13:08.672 [2024-07-15 19:07:48.905402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.905452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:08.672 19:07:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:08.672 "params": { 00:13:08.672 "name": "Nvme1", 00:13:08.672 "trtype": "tcp", 00:13:08.672 "traddr": "10.0.0.2", 00:13:08.672 "adrfam": "ipv4", 00:13:08.672 "trsvcid": "4420", 00:13:08.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.672 "hdgst": false, 00:13:08.672 "ddgst": false 00:13:08.672 }, 00:13:08.672 "method": "bdev_nvme_attach_controller" 00:13:08.672 }' 00:13:08.672 [2024-07-15 19:07:48.913350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.913378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.921361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.921384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.929372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.929392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.937396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.937415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.944691] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:13:08.672 [2024-07-15 19:07:48.944762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283282 ] 00:13:08.672 [2024-07-15 19:07:48.945419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.945439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.953438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.953472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.961460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.961480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.969482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.969502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.672 [2024-07-15 19:07:48.977521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.977546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.985542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.985567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:48.993561] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:48.993586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.001586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.001610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.007809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.672 [2024-07-15 19:07:49.009607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.009633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.017669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.017711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.025666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.025697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.033675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.033700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.041698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.041723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.049757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.049784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.057744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.057769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.065766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.065791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.073818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.073855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.081842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.081889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.089834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.089859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.672 [2024-07-15 19:07:49.097854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.672 [2024-07-15 19:07:49.097900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.105875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.105921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.113903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.113943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.121939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.121960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.129239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.931 [2024-07-15 19:07:49.129959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.129980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.137971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.137992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.146020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.146054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.154059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.154097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.162084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.162120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.170111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.170164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.178129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.178181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.186164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.186203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.194181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.194234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.202176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.202199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.210245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.210284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.218263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:08.931 [2024-07-15 19:07:49.218303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.226271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.226302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.234261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.234285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.242296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.242335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.250331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.250362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.258349] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.258376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.266373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.266399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.274391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.274419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.282412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.282438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.290433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.290458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.298456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.298481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.306478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.306502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.314505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.314532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.322529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.322556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.330552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.330579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.338571] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.338596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.346617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.931 [2024-07-15 19:07:49.346647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.931 [2024-07-15 19:07:49.354634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.932 [2024-07-15 19:07:49.354661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.932 Running I/O for 5 seconds... 00:13:09.190 [2024-07-15 19:07:49.362656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.362681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.377856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.377898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.389199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.389231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.402714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.402746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.413689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.413726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.425522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.425553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.437029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.437057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.448395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.448425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.459663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.459694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.473002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.473031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.483581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.483613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.494840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 
[2024-07-15 19:07:49.494872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.506203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.506234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.517329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.517360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.528777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.528807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.540176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.540206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.551709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.551740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.563050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.563077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.574457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.574487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.585573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.585603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.597020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.597048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.608023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.608051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.190 [2024-07-15 19:07:49.619480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.190 [2024-07-15 19:07:49.619510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.630651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.630689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.641739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.641769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.653139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.653166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.666015] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.666042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.676020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.676047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.687858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.687897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.698693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.698724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.449 [2024-07-15 19:07:49.709789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.449 [2024-07-15 19:07:49.709819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.721240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.721271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.732679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.732709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.743939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.743966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.755364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.755394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.766991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.767018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.778234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.778264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.789171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.789216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.800398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.800428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.811400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.811430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.450 [2024-07-15 19:07:49.823080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.450 [2024-07-15 19:07:49.823108] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:09.450 [2024-07-15 19:07:49.834373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:09.450 [2024-07-15 19:07:49.834404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two messages (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: "Unable to add namespace") repeat for each remaining duplicate-NSID add attempt from 19:07:49.847 through 19:07:53.260 ...]
00:13:13.064 [2024-07-15 19:07:53.271544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:13.064 [2024-07-15 19:07:53.271574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:13.064 [2024-07-15 19:07:53.283111]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.283138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.294082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.294109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.305513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.305542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.317058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.317085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.328385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.328416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.339735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.339765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.351034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.351062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.362404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.362433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.373804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.373834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.384977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.385004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.396304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.396334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.407577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.407606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.419053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.419080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.430413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.430443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.441430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.441460] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.453037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.453064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.464612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.464642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.476073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.476100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.064 [2024-07-15 19:07:53.487517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.064 [2024-07-15 19:07:53.487547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.498682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.498712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.512035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.512062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.522282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.522312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.534097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.534124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.545249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.545280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.556979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.557006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.568072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.568099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.579388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.579419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.590704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.590734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.602182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.602212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.613561] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.613591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.625034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.625062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.636122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.636149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.649149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.649176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.659266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.659296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.671371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.671401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.682972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.682999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.694398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.694428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.705383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.705413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.716694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.716723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.729987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.730014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.740200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.740230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.322 [2024-07-15 19:07:53.752018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.322 [2024-07-15 19:07:53.752045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.763102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.763131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.774561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.774591] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.785713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.785743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.797083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.797110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.808707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.808737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.820231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.820261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.831384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.831414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.846870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.846927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.857070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.857097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.868810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.868840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.879872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.879925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.893353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.893383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.903704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.903734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.914804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.914834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.926189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.926219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.937416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.937447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.950672] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.950702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.961354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.961384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.972519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.972550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.983934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.983961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:53.995279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:53.995309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.580 [2024-07-15 19:07:54.007112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.580 [2024-07-15 19:07:54.007139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.018457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.018496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.029234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.029262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.039714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.039742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.051170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.051202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.062553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.062581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.073994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.074022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.085475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.085515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.096832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.096863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.108594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.108624] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.120656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.837 [2024-07-15 19:07:54.120686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.837 [2024-07-15 19:07:54.132718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.132756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.144231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.144262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.155960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.155986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.167381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.167411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.180560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.180589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.190887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.190931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.203097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.203124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.214472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.214501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.225831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.225860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.237412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.237443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.248829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.248858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.838 [2024-07-15 19:07:54.259984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.838 [2024-07-15 19:07:54.260011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.271281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.271312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.282787] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.282816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.293977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.294004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.306873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.306910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.317115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.317142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.328756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.328786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.340147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.340196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.351762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.351800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.363755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.363785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.375211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.375242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.383022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.383046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 00:13:14.096 Latency(us) 00:13:14.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.096 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:14.096 Nvme1n1 : 5.01 11308.90 88.35 0.00 0.00 11302.93 4927.34 24272.59 00:13:14.096 =================================================================================================================== 00:13:14.096 Total : 11308.90 88.35 0.00 0.00 11302.93 4927.34 24272.59 00:13:14.096 [2024-07-15 19:07:54.391038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.391062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.399055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.399078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.407097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.407128] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.415163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.415210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.423180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.423227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.431199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.431245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.439223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.096 [2024-07-15 19:07:54.439270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.096 [2024-07-15 19:07:54.447246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.447294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.455271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.455319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.463286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.463333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.471310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.471374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.479331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.479382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.487353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.487416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.495370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.495417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.503393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.503439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.511424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.511472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.519397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.519426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.097 [2024-07-15 19:07:54.527416] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.097 [2024-07-15 19:07:54.527440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.535437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.535462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.543459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.543484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.551489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.551516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.559551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.559598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.567570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.567618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.575552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.575577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.583568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.583593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.591593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.591617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.599614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.599639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.607639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.607664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.615714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.615762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.623722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.623769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.631703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.631727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.639723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.639757] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 [2024-07-15 19:07:54.647746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.355 [2024-07-15 19:07:54.647770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3283282) - No such process 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3283282 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.355 delay0 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.355 19:07:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:14.355 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.613 [2024-07-15 19:07:54.803055] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:21.165 Initializing NVMe Controllers 00:13:21.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:21.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:21.165 Initialization complete. Launching workers. 
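For reference, the delay-namespace sequence and the abort run traced above amount roughly to the following commands. This is a sketch reconstructed from the xtrace: the direct scripts/rpc.py invocations are an assumption standing in for the test's rpc_cmd wrapper, while the method names, arguments, NQN, and binary path are taken from the log itself.

    # remove the NSID 1 namespace left over from the earlier zcopy I/O run
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # create a delay bdev on top of malloc0 (latency values are in microseconds, i.e. ~1s avg/p99)
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # re-attach the delayed bdev as NSID 1 of the same subsystem
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # drive it with the abort example over TCP, as in the trace above
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'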
00:13:21.165 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:13:21.165 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 395, failed to submit 33 00:13:21.165 success 175, unsuccess 220, failed 0 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.165 rmmod nvme_tcp 00:13:21.165 rmmod nvme_fabrics 00:13:21.165 rmmod nvme_keyring 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3281944 ']' 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3281944 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3281944 ']' 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3281944 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3281944 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3281944' 00:13:21.165 killing process with pid 3281944 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3281944 00:13:21.165 19:08:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3281944 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.165 19:08:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.065 19:08:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.065 00:13:23.065 real 0m27.740s 00:13:23.065 user 0m41.160s 00:13:23.065 sys 0m8.120s 00:13:23.065 19:08:03 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.066 19:08:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:23.066 ************************************ 00:13:23.066 END TEST nvmf_zcopy 00:13:23.066 ************************************ 00:13:23.066 19:08:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:23.066 19:08:03 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:23.066 19:08:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:23.066 19:08:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.066 19:08:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.066 ************************************ 00:13:23.066 START TEST nvmf_nmic 00:13:23.066 ************************************ 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:23.066 * Looking for test storage... 00:13:23.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.066 19:08:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:25.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:25.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:25.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:25.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.029 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.030 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:13:25.288 00:13:25.288 --- 10.0.0.2 ping statistics --- 00:13:25.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.288 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:25.288 00:13:25.288 --- 10.0.0.1 ping statistics --- 00:13:25.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.288 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3286658 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3286658 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3286658 ']' 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.288 19:08:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:25.288 [2024-07-15 19:08:05.655353] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:13:25.288 [2024-07-15 19:08:05.655440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.288 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.546 [2024-07-15 19:08:05.728417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.546 [2024-07-15 19:08:05.850161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.546 [2024-07-15 19:08:05.850226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:25.546 [2024-07-15 19:08:05.850242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.546 [2024-07-15 19:08:05.850255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.546 [2024-07-15 19:08:05.850267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.546 [2024-07-15 19:08:05.850353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.546 [2024-07-15 19:08:05.850393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.546 [2024-07-15 19:08:05.850447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.546 [2024-07-15 19:08:05.850450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 [2024-07-15 19:08:06.629982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 Malloc0 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 [2024-07-15 19:08:06.683051] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:26.479 test case1: single bdev can't be used in multiple subsystems 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 [2024-07-15 19:08:06.706909] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:26.479 [2024-07-15 19:08:06.706961] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:26.479 [2024-07-15 19:08:06.706977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.479 request: 00:13:26.479 { 00:13:26.479 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:26.479 "namespace": { 00:13:26.479 "bdev_name": "Malloc0", 00:13:26.479 "no_auto_visible": false 00:13:26.479 }, 00:13:26.479 "method": "nvmf_subsystem_add_ns", 00:13:26.479 "req_id": 1 00:13:26.479 } 00:13:26.479 Got JSON-RPC error response 00:13:26.479 response: 00:13:26.479 { 00:13:26.479 "code": -32602, 00:13:26.479 "message": "Invalid parameters" 00:13:26.479 } 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:26.479 Adding namespace failed - expected result. 
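The error above is the expected outcome of test case1: Malloc0 is already claimed (exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching it to a second subsystem is rejected with JSON-RPC error -32602 and the script treats the failure as a pass. A minimal sketch of the same sequence driven by rpc.py directly (the full /var/jenkins/... path from the log is shortened here; the bdev and subsystem names are the ones reported in the entries above):

    # second subsystem and listener on the same target
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # expected to fail: Malloc0 is held exclusive_write by cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        || echo ' Adding namespace failed - expected result.'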
00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:26.479 test case2: host connect to nvmf target in multiple paths 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:26.479 [2024-07-15 19:08:06.715041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.479 19:08:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.045 19:08:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:27.611 19:08:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.611 19:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.611 19:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.611 19:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:27.611 19:08:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:30.138 19:08:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:30.138 19:08:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:30.138 19:08:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.138 19:08:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:30.138 19:08:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.138 19:08:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:30.138 19:08:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:30.138 [global] 00:13:30.138 thread=1 00:13:30.138 invalidate=1 00:13:30.138 rw=write 00:13:30.138 time_based=1 00:13:30.138 runtime=1 00:13:30.138 ioengine=libaio 00:13:30.138 direct=1 00:13:30.138 bs=4096 00:13:30.138 iodepth=1 00:13:30.138 norandommap=0 00:13:30.138 numjobs=1 00:13:30.138 00:13:30.138 verify_dump=1 00:13:30.138 verify_backlog=512 00:13:30.138 verify_state_save=0 00:13:30.138 do_verify=1 00:13:30.138 verify=crc32c-intel 00:13:30.138 [job0] 00:13:30.138 filename=/dev/nvme0n1 00:13:30.138 Could not set queue depth (nvme0n1) 00:13:30.138 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:30.138 fio-3.35 00:13:30.138 Starting 1 thread 00:13:31.068 00:13:31.068 job0: (groupid=0, jobs=1): err= 0: pid=3287306: Mon Jul 15 19:08:11 2024 00:13:31.068 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:31.068 slat (nsec): min=5973, max=52562, avg=15055.76, stdev=7817.78 
00:13:31.068 clat (usec): min=285, max=634, avg=370.13, stdev=44.21 00:13:31.068 lat (usec): min=292, max=656, avg=385.19, stdev=46.93 00:13:31.068 clat percentiles (usec): 00:13:31.068 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:13:31.068 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 371], 60.00th=[ 388], 00:13:31.068 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 420], 95.00th=[ 433], 00:13:31.068 | 99.00th=[ 478], 99.50th=[ 537], 99.90th=[ 603], 99.95th=[ 635], 00:13:31.068 | 99.99th=[ 635] 00:13:31.068 write: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec); 0 zone resets 00:13:31.068 slat (usec): min=7, max=30681, avg=31.98, stdev=760.13 00:13:31.068 clat (usec): min=181, max=345, avg=210.91, stdev=22.95 00:13:31.068 lat (usec): min=191, max=31012, avg=242.88, stdev=763.58 00:13:31.068 clat percentiles (usec): 00:13:31.068 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:13:31.068 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:13:31.068 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 269], 00:13:31.068 | 99.00th=[ 310], 99.50th=[ 314], 99.90th=[ 338], 99.95th=[ 347], 00:13:31.068 | 99.99th=[ 347] 00:13:31.068 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:31.068 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:31.068 lat (usec) : 250=48.20%, 500=51.39%, 750=0.41% 00:13:31.068 cpu : usr=3.20%, sys=3.80%, ctx=3168, majf=0, minf=2 00:13:31.068 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.068 issued rwts: total=1536,1628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.068 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.068 00:13:31.068 Run status group 0 (all jobs): 00:13:31.068 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:13:31.068 WRITE: bw=6505KiB/s (6662kB/s), 6505KiB/s-6505KiB/s (6662kB/s-6662kB/s), io=6512KiB (6668kB), run=1001-1001msec 00:13:31.068 00:13:31.068 Disk stats (read/write): 00:13:31.068 nvme0n1: ios=1362/1536, merge=0/0, ticks=1426/308, in_queue=1734, util=98.80% 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:31.068 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:31.068 rmmod nvme_tcp 00:13:31.068 rmmod nvme_fabrics 00:13:31.326 rmmod nvme_keyring 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3286658 ']' 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3286658 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3286658 ']' 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3286658 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3286658 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3286658' 00:13:31.326 killing process with pid 3286658 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3286658 00:13:31.326 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3286658 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.584 19:08:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.487 19:08:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:33.487 00:13:33.487 real 0m10.540s 00:13:33.487 user 0m24.827s 00:13:33.487 sys 0m2.494s 00:13:33.487 19:08:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.745 19:08:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:33.745 ************************************ 00:13:33.745 END TEST nvmf_nmic 00:13:33.745 ************************************ 00:13:33.745 19:08:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:33.745 19:08:13 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:33.745 19:08:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:33.745 
19:08:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.745 19:08:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.745 ************************************ 00:13:33.745 START TEST nvmf_fio_target 00:13:33.745 ************************************ 00:13:33.745 19:08:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:33.745 * Looking for test storage... 00:13:33.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.745 19:08:14 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:33.746 19:08:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.647 19:08:16 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:35.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:35.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.647 19:08:16 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:35.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:35.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.647 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:35.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:13:35.906 00:13:35.906 --- 10.0.0.2 ping statistics --- 00:13:35.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.906 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:13:35.906 00:13:35.906 --- 10.0.0.1 ping statistics --- 00:13:35.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.906 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3289378 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3289378 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3289378 ']' 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.906 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
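As in the nmic run above, nvmf_tcp_init splits the two E810 ports between the host and a network namespace so the initiator side (cvl_0_1, 10.0.0.1) and the target side (cvl_0_0, 10.0.0.2 inside cvl_0_0_ns_spdk) exchange NVMe/TCP traffic over a real link. A condensed sketch of the plumbing traced in the entries above (interface, namespace and address values are those reported in the log; paths are shortened and error handling omitted):

    ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                  # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
    # the target application is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &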
00:13:35.907 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.907 19:08:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.907 [2024-07-15 19:08:16.242352] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:13:35.907 [2024-07-15 19:08:16.242443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.907 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.907 [2024-07-15 19:08:16.313621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.166 [2024-07-15 19:08:16.434713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.166 [2024-07-15 19:08:16.434780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.166 [2024-07-15 19:08:16.434795] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.166 [2024-07-15 19:08:16.434809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.166 [2024-07-15 19:08:16.434820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.166 [2024-07-15 19:08:16.434907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.166 [2024-07-15 19:08:16.434949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.166 [2024-07-15 19:08:16.435000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.166 [2024-07-15 19:08:16.435003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:37.100 [2024-07-15 19:08:17.433373] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.100 19:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:37.358 19:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:37.358 19:08:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:37.924 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:37.924 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:37.924 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
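fio.sh assembles its test namespaces from 64 MiB malloc bdevs with 512-byte blocks (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512); each bdev_malloc_create call prints the name of the new bdev (Malloc0, Malloc1, ...), which the script collects into malloc_bdevs, raid_malloc_bdevs and concat_malloc_bdevs before striping or concatenating them with bdev_raid_create, as the following entries show. A sketch of the same calls (rpc.py path shortened; bdev names are those reported in the log):

    # plain malloc bdevs exported one-to-one as namespaces
    ./scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc1
    # two more malloc bdevs striped into a raid0 volume
    ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    # three more concatenated into concat0
    ./scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'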
00:13:37.924 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.182 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:38.182 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:38.440 19:08:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.698 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:38.698 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:38.956 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:38.956 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:39.215 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:39.215 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:39.473 19:08:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:39.730 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:39.730 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:39.995 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:39.995 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:40.281 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.539 [2024-07-15 19:08:20.825509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.539 19:08:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:40.797 19:08:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:41.055 19:08:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.620 19:08:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:41.620 19:08:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:41.620 19:08:22 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.620 19:08:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:41.620 19:08:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:41.620 19:08:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:44.145 19:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:44.145 19:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:44.145 19:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.145 19:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:44.145 19:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.145 19:08:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:44.145 19:08:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:44.145 [global] 00:13:44.145 thread=1 00:13:44.145 invalidate=1 00:13:44.145 rw=write 00:13:44.145 time_based=1 00:13:44.145 runtime=1 00:13:44.145 ioengine=libaio 00:13:44.145 direct=1 00:13:44.145 bs=4096 00:13:44.145 iodepth=1 00:13:44.145 norandommap=0 00:13:44.145 numjobs=1 00:13:44.145 00:13:44.145 verify_dump=1 00:13:44.145 verify_backlog=512 00:13:44.145 verify_state_save=0 00:13:44.145 do_verify=1 00:13:44.145 verify=crc32c-intel 00:13:44.145 [job0] 00:13:44.145 filename=/dev/nvme0n1 00:13:44.145 [job1] 00:13:44.145 filename=/dev/nvme0n2 00:13:44.145 [job2] 00:13:44.145 filename=/dev/nvme0n3 00:13:44.145 [job3] 00:13:44.145 filename=/dev/nvme0n4 00:13:44.145 Could not set queue depth (nvme0n1) 00:13:44.145 Could not set queue depth (nvme0n2) 00:13:44.145 Could not set queue depth (nvme0n3) 00:13:44.145 Could not set queue depth (nvme0n4) 00:13:44.145 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.145 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.145 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.145 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.145 fio-3.35 00:13:44.145 Starting 4 threads 00:13:45.076 00:13:45.076 job0: (groupid=0, jobs=1): err= 0: pid=3290465: Mon Jul 15 19:08:25 2024 00:13:45.076 read: IOPS=1288, BW=5155KiB/s (5279kB/s)(5160KiB/1001msec) 00:13:45.076 slat (nsec): min=7304, max=61419, avg=13420.65, stdev=7058.81 00:13:45.076 clat (usec): min=300, max=694, avg=409.85, stdev=67.58 00:13:45.076 lat (usec): min=308, max=712, avg=423.28, stdev=68.50 00:13:45.076 clat percentiles (usec): 00:13:45.076 | 1.00th=[ 314], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 351], 00:13:45.076 | 30.00th=[ 367], 40.00th=[ 383], 50.00th=[ 400], 60.00th=[ 412], 00:13:45.076 | 70.00th=[ 433], 80.00th=[ 461], 90.00th=[ 506], 95.00th=[ 545], 00:13:45.076 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 693], 99.95th=[ 693], 00:13:45.076 | 99.99th=[ 693] 00:13:45.076 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:45.076 slat (nsec): min=7600, max=74437, avg=18006.68, stdev=9631.88 00:13:45.076 clat (usec): 
min=198, max=3794, avg=269.69, stdev=137.65 00:13:45.076 lat (usec): min=209, max=3804, avg=287.69, stdev=138.96 00:13:45.076 clat percentiles (usec): 00:13:45.076 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:13:45.076 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:13:45.076 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 363], 95.00th=[ 400], 00:13:45.076 | 99.00th=[ 478], 99.50th=[ 515], 99.90th=[ 2868], 99.95th=[ 3785], 00:13:45.076 | 99.99th=[ 3785] 00:13:45.076 bw ( KiB/s): min= 7328, max= 7328, per=41.59%, avg=7328.00, stdev= 0.00, samples=1 00:13:45.076 iops : min= 1832, max= 1832, avg=1832.00, stdev= 0.00, samples=1 00:13:45.076 lat (usec) : 250=31.99%, 500=62.81%, 750=5.06% 00:13:45.076 lat (msec) : 2=0.04%, 4=0.11% 00:13:45.076 cpu : usr=2.80%, sys=6.40%, ctx=2827, majf=0, minf=1 00:13:45.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.076 issued rwts: total=1290,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.076 job1: (groupid=0, jobs=1): err= 0: pid=3290466: Mon Jul 15 19:08:25 2024 00:13:45.076 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:45.076 slat (nsec): min=7438, max=59079, avg=14387.91, stdev=7583.03 00:13:45.076 clat (usec): min=450, max=816, avg=543.46, stdev=56.06 00:13:45.076 lat (usec): min=459, max=852, avg=557.84, stdev=60.01 00:13:45.076 clat percentiles (usec): 00:13:45.076 | 1.00th=[ 461], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 502], 00:13:45.076 | 30.00th=[ 510], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 537], 00:13:45.076 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 660], 00:13:45.077 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 791], 99.95th=[ 816], 00:13:45.077 | 99.99th=[ 816] 00:13:45.077 write: IOPS=1353, BW=5415KiB/s (5545kB/s)(5420KiB/1001msec); 0 zone resets 00:13:45.077 slat (nsec): min=7742, max=80034, avg=18795.07, stdev=11737.53 00:13:45.077 clat (usec): min=205, max=903, avg=289.64, stdev=64.34 00:13:45.077 lat (usec): min=215, max=912, avg=308.44, stdev=71.41 00:13:45.077 clat percentiles (usec): 00:13:45.077 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:13:45.077 | 30.00th=[ 243], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 297], 00:13:45.077 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 383], 95.00th=[ 408], 00:13:45.077 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 668], 99.95th=[ 906], 00:13:45.077 | 99.99th=[ 906] 00:13:45.077 bw ( KiB/s): min= 5512, max= 5512, per=31.28%, avg=5512.00, stdev= 0.00, samples=1 00:13:45.077 iops : min= 1378, max= 1378, avg=1378.00, stdev= 0.00, samples=1 00:13:45.077 lat (usec) : 250=20.13%, 500=45.36%, 750=34.38%, 1000=0.13% 00:13:45.077 cpu : usr=3.20%, sys=5.10%, ctx=2380, majf=0, minf=1 00:13:45.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.077 issued rwts: total=1024,1355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.077 job2: (groupid=0, jobs=1): err= 0: pid=3290467: Mon Jul 15 19:08:25 2024 00:13:45.077 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 
00:13:45.077 slat (nsec): min=7310, max=81317, avg=22083.56, stdev=9510.80 00:13:45.077 clat (usec): min=389, max=791, avg=523.01, stdev=86.57 00:13:45.077 lat (usec): min=399, max=826, avg=545.10, stdev=91.20 00:13:45.077 clat percentiles (usec): 00:13:45.077 | 1.00th=[ 404], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:13:45.077 | 30.00th=[ 453], 40.00th=[ 474], 50.00th=[ 506], 60.00th=[ 529], 00:13:45.077 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 660], 95.00th=[ 685], 00:13:45.077 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 775], 99.95th=[ 791], 00:13:45.077 | 99.99th=[ 791] 00:13:45.077 write: IOPS=1185, BW=4743KiB/s (4857kB/s)(4748KiB/1001msec); 0 zone resets 00:13:45.077 slat (usec): min=7, max=1802, avg=25.15, stdev=67.47 00:13:45.077 clat (usec): min=212, max=1729, avg=336.00, stdev=86.12 00:13:45.077 lat (usec): min=222, max=2196, avg=361.15, stdev=114.09 00:13:45.077 clat percentiles (usec): 00:13:45.077 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 245], 20.00th=[ 265], 00:13:45.077 | 30.00th=[ 285], 40.00th=[ 314], 50.00th=[ 334], 60.00th=[ 355], 00:13:45.077 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 429], 95.00th=[ 457], 00:13:45.077 | 99.00th=[ 545], 99.50th=[ 627], 99.90th=[ 947], 99.95th=[ 1729], 00:13:45.077 | 99.99th=[ 1729] 00:13:45.077 bw ( KiB/s): min= 4312, max= 4312, per=24.47%, avg=4312.00, stdev= 0.00, samples=1 00:13:45.077 iops : min= 1078, max= 1078, avg=1078.00, stdev= 0.00, samples=1 00:13:45.077 lat (usec) : 250=7.37%, 500=66.98%, 750=25.15%, 1000=0.45% 00:13:45.077 lat (msec) : 2=0.05% 00:13:45.077 cpu : usr=2.20%, sys=5.40%, ctx=2214, majf=0, minf=1 00:13:45.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.077 issued rwts: total=1024,1187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.077 job3: (groupid=0, jobs=1): err= 0: pid=3290468: Mon Jul 15 19:08:25 2024 00:13:45.077 read: IOPS=22, BW=88.3KiB/s (90.4kB/s)(92.0KiB/1042msec) 00:13:45.077 slat (nsec): min=10887, max=21627, avg=17895.65, stdev=2105.52 00:13:45.077 clat (usec): min=506, max=42448, avg=38070.11, stdev=11861.20 00:13:45.077 lat (usec): min=523, max=42464, avg=38088.01, stdev=11861.07 00:13:45.077 clat percentiles (usec): 00:13:45.077 | 1.00th=[ 506], 5.00th=[ 519], 10.00th=[41157], 20.00th=[41157], 00:13:45.077 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:13:45.077 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:45.077 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:45.077 | 99.99th=[42206] 00:13:45.077 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:13:45.077 slat (usec): min=6, max=276, avg=20.68, stdev=20.14 00:13:45.077 clat (usec): min=214, max=1047, avg=298.94, stdev=85.48 00:13:45.077 lat (usec): min=222, max=1074, avg=319.63, stdev=92.60 00:13:45.077 clat percentiles (usec): 00:13:45.077 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 243], 00:13:45.077 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 293], 00:13:45.077 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 412], 00:13:45.077 | 99.00th=[ 775], 99.50th=[ 816], 99.90th=[ 1045], 99.95th=[ 1045], 00:13:45.077 | 99.99th=[ 1045] 00:13:45.077 bw ( KiB/s): min= 4096, max= 4096, per=23.25%, avg=4096.00, stdev= 
0.00, samples=1 00:13:45.077 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:45.077 lat (usec) : 250=25.42%, 500=68.60%, 750=0.93%, 1000=0.93% 00:13:45.077 lat (msec) : 2=0.19%, 50=3.93% 00:13:45.077 cpu : usr=0.77%, sys=0.86%, ctx=537, majf=0, minf=2 00:13:45.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:45.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.077 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:45.077 00:13:45.077 Run status group 0 (all jobs): 00:13:45.077 READ: bw=12.6MiB/s (13.2MB/s), 88.3KiB/s-5155KiB/s (90.4kB/s-5279kB/s), io=13.1MiB (13.8MB), run=1001-1042msec 00:13:45.077 WRITE: bw=17.2MiB/s (18.0MB/s), 1965KiB/s-6138KiB/s (2013kB/s-6285kB/s), io=17.9MiB (18.8MB), run=1001-1042msec 00:13:45.077 00:13:45.077 Disk stats (read/write): 00:13:45.077 nvme0n1: ios=1073/1379, merge=0/0, ticks=528/362, in_queue=890, util=85.67% 00:13:45.077 nvme0n2: ios=938/1024, merge=0/0, ticks=1393/278, in_queue=1671, util=89.73% 00:13:45.077 nvme0n3: ios=890/1024, merge=0/0, ticks=532/328, in_queue=860, util=94.89% 00:13:45.077 nvme0n4: ios=71/512, merge=0/0, ticks=764/142, in_queue=906, util=95.68% 00:13:45.077 19:08:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:45.335 [global] 00:13:45.335 thread=1 00:13:45.335 invalidate=1 00:13:45.335 rw=randwrite 00:13:45.335 time_based=1 00:13:45.335 runtime=1 00:13:45.335 ioengine=libaio 00:13:45.335 direct=1 00:13:45.335 bs=4096 00:13:45.335 iodepth=1 00:13:45.335 norandommap=0 00:13:45.335 numjobs=1 00:13:45.335 00:13:45.335 verify_dump=1 00:13:45.335 verify_backlog=512 00:13:45.335 verify_state_save=0 00:13:45.335 do_verify=1 00:13:45.335 verify=crc32c-intel 00:13:45.335 [job0] 00:13:45.335 filename=/dev/nvme0n1 00:13:45.335 [job1] 00:13:45.335 filename=/dev/nvme0n2 00:13:45.335 [job2] 00:13:45.335 filename=/dev/nvme0n3 00:13:45.335 [job3] 00:13:45.335 filename=/dev/nvme0n4 00:13:45.335 Could not set queue depth (nvme0n1) 00:13:45.335 Could not set queue depth (nvme0n2) 00:13:45.335 Could not set queue depth (nvme0n3) 00:13:45.335 Could not set queue depth (nvme0n4) 00:13:45.335 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.335 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.335 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.335 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:45.335 fio-3.35 00:13:45.335 Starting 4 threads 00:13:46.715 00:13:46.715 job0: (groupid=0, jobs=1): err= 0: pid=3290694: Mon Jul 15 19:08:26 2024 00:13:46.715 read: IOPS=20, BW=83.5KiB/s (85.5kB/s)(84.0KiB/1006msec) 00:13:46.715 slat (nsec): min=15939, max=39336, avg=27519.00, stdev=10011.19 00:13:46.715 clat (usec): min=40892, max=41440, avg=40988.54, stdev=110.28 00:13:46.715 lat (usec): min=40908, max=41462, avg=41016.06, stdev=107.88 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:46.715 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:13:46.715 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:46.715 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:46.715 | 99.99th=[41681] 00:13:46.715 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:13:46.715 slat (nsec): min=7786, max=57325, avg=19646.80, stdev=7483.78 00:13:46.715 clat (usec): min=202, max=1382, avg=256.03, stdev=61.66 00:13:46.715 lat (usec): min=216, max=1395, avg=275.67, stdev=61.99 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 231], 20.00th=[ 235], 00:13:46.715 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:13:46.715 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 318], 00:13:46.715 | 99.00th=[ 449], 99.50th=[ 494], 99.90th=[ 1385], 99.95th=[ 1385], 00:13:46.715 | 99.99th=[ 1385] 00:13:46.715 bw ( KiB/s): min= 4096, max= 4096, per=33.87%, avg=4096.00, stdev= 0.00, samples=1 00:13:46.715 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:46.715 lat (usec) : 250=57.79%, 500=37.90%, 750=0.19% 00:13:46.715 lat (msec) : 2=0.19%, 50=3.94% 00:13:46.715 cpu : usr=1.09%, sys=1.00%, ctx=534, majf=0, minf=1 00:13:46.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.715 job1: (groupid=0, jobs=1): err= 0: pid=3290702: Mon Jul 15 19:08:26 2024 00:13:46.715 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:13:46.715 slat (nsec): min=15339, max=34010, avg=25082.81, stdev=8929.64 00:13:46.715 clat (usec): min=40886, max=41413, avg=40986.61, stdev=105.33 00:13:46.715 lat (usec): min=40919, max=41430, avg=41011.69, stdev=101.62 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:46.715 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:46.715 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:46.715 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:46.715 | 99.99th=[41157] 00:13:46.715 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:13:46.715 slat (nsec): min=6602, max=38668, avg=15461.36, stdev=5703.89 00:13:46.715 clat (usec): min=205, max=1350, avg=267.63, stdev=75.97 00:13:46.715 lat (usec): min=216, max=1364, avg=283.09, stdev=76.29 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 241], 00:13:46.715 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:13:46.715 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 355], 00:13:46.715 | 99.00th=[ 586], 99.50th=[ 701], 99.90th=[ 1352], 99.95th=[ 1352], 00:13:46.715 | 99.99th=[ 1352] 00:13:46.715 bw ( KiB/s): min= 4096, max= 4096, per=33.87%, avg=4096.00, stdev= 0.00, samples=1 00:13:46.715 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:46.715 lat (usec) : 250=29.83%, 500=64.35%, 750=1.50%, 1000=0.19% 00:13:46.715 lat (msec) : 2=0.19%, 50=3.94% 00:13:46.715 cpu : usr=0.20%, sys=1.09%, ctx=533, majf=0, minf=1 00:13:46.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:13:46.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.715 job2: (groupid=0, jobs=1): err= 0: pid=3290727: Mon Jul 15 19:08:26 2024 00:13:46.715 read: IOPS=1206, BW=4827KiB/s (4943kB/s)(4832KiB/1001msec) 00:13:46.715 slat (nsec): min=6385, max=69900, avg=23825.32, stdev=11747.35 00:13:46.715 clat (usec): min=335, max=593, avg=443.52, stdev=50.54 00:13:46.715 lat (usec): min=369, max=631, avg=467.35, stdev=56.41 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 392], 00:13:46.715 | 30.00th=[ 404], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 449], 00:13:46.715 | 70.00th=[ 465], 80.00th=[ 490], 90.00th=[ 519], 95.00th=[ 537], 00:13:46.715 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 586], 99.95th=[ 594], 00:13:46.715 | 99.99th=[ 594] 00:13:46.715 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:46.715 slat (nsec): min=6805, max=66640, avg=18168.19, stdev=10249.92 00:13:46.715 clat (usec): min=186, max=2799, avg=255.55, stdev=112.64 00:13:46.715 lat (usec): min=194, max=2822, avg=273.72, stdev=115.92 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:13:46.715 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 245], 00:13:46.715 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 367], 95.00th=[ 383], 00:13:46.715 | 99.00th=[ 519], 99.50th=[ 685], 99.90th=[ 2212], 99.95th=[ 2802], 00:13:46.715 | 99.99th=[ 2802] 00:13:46.715 bw ( KiB/s): min= 6192, max= 6192, per=51.20%, avg=6192.00, stdev= 0.00, samples=1 00:13:46.715 iops : min= 1548, max= 1548, avg=1548.00, stdev= 0.00, samples=1 00:13:46.715 lat (usec) : 250=37.35%, 500=54.77%, 750=7.62%, 1000=0.11% 00:13:46.715 lat (msec) : 2=0.07%, 4=0.07% 00:13:46.715 cpu : usr=2.70%, sys=6.30%, ctx=2745, majf=0, minf=2 00:13:46.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 issued rwts: total=1208,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.715 job3: (groupid=0, jobs=1): err= 0: pid=3290740: Mon Jul 15 19:08:26 2024 00:13:46.715 read: IOPS=20, BW=82.7KiB/s (84.7kB/s)(84.0KiB/1016msec) 00:13:46.715 slat (nsec): min=15278, max=33876, avg=25780.19, stdev=8932.44 00:13:46.715 clat (usec): min=40889, max=41178, avg=40976.03, stdev=64.15 00:13:46.715 lat (usec): min=40922, max=41194, avg=41001.81, stdev=59.55 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:46.715 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:46.715 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:46.715 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:46.715 | 99.99th=[41157] 00:13:46.715 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:13:46.715 slat (nsec): min=6413, max=46223, avg=14915.42, stdev=5447.96 00:13:46.715 clat (usec): min=219, max=2059, avg=282.33, stdev=157.22 
00:13:46.715 lat (usec): min=229, max=2079, avg=297.24, stdev=158.07 00:13:46.715 clat percentiles (usec): 00:13:46.715 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 247], 00:13:46.715 | 30.00th=[ 251], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:13:46.715 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 326], 00:13:46.715 | 99.00th=[ 947], 99.50th=[ 1631], 99.90th=[ 2057], 99.95th=[ 2057], 00:13:46.715 | 99.99th=[ 2057] 00:13:46.715 bw ( KiB/s): min= 4096, max= 4096, per=33.87%, avg=4096.00, stdev= 0.00, samples=1 00:13:46.715 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:46.715 lat (usec) : 250=29.64%, 500=63.41%, 750=1.31%, 1000=0.75% 00:13:46.715 lat (msec) : 2=0.75%, 4=0.19%, 50=3.94% 00:13:46.715 cpu : usr=0.49%, sys=0.69%, ctx=533, majf=0, minf=1 00:13:46.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.715 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:46.715 00:13:46.715 Run status group 0 (all jobs): 00:13:46.715 READ: bw=5004KiB/s (5124kB/s), 82.7KiB/s-4827KiB/s (84.7kB/s-4943kB/s), io=5084KiB (5206kB), run=1001-1016msec 00:13:46.715 WRITE: bw=11.8MiB/s (12.4MB/s), 2016KiB/s-6138KiB/s (2064kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1016msec 00:13:46.715 00:13:46.715 Disk stats (read/write): 00:13:46.716 nvme0n1: ios=60/512, merge=0/0, ticks=902/120, in_queue=1022, util=99.10% 00:13:46.716 nvme0n2: ios=30/512, merge=0/0, ticks=711/127, in_queue=838, util=86.38% 00:13:46.716 nvme0n3: ios=1072/1182, merge=0/0, ticks=633/310, in_queue=943, util=99.06% 00:13:46.716 nvme0n4: ios=37/512, merge=0/0, ticks=688/136, in_queue=824, util=90.18% 00:13:46.716 19:08:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:46.716 [global] 00:13:46.716 thread=1 00:13:46.716 invalidate=1 00:13:46.716 rw=write 00:13:46.716 time_based=1 00:13:46.716 runtime=1 00:13:46.716 ioengine=libaio 00:13:46.716 direct=1 00:13:46.716 bs=4096 00:13:46.716 iodepth=128 00:13:46.716 norandommap=0 00:13:46.716 numjobs=1 00:13:46.716 00:13:46.716 verify_dump=1 00:13:46.716 verify_backlog=512 00:13:46.716 verify_state_save=0 00:13:46.716 do_verify=1 00:13:46.716 verify=crc32c-intel 00:13:46.716 [job0] 00:13:46.716 filename=/dev/nvme0n1 00:13:46.716 [job1] 00:13:46.716 filename=/dev/nvme0n2 00:13:46.716 [job2] 00:13:46.716 filename=/dev/nvme0n3 00:13:46.716 [job3] 00:13:46.716 filename=/dev/nvme0n4 00:13:46.716 Could not set queue depth (nvme0n1) 00:13:46.716 Could not set queue depth (nvme0n2) 00:13:46.716 Could not set queue depth (nvme0n3) 00:13:46.716 Could not set queue depth (nvme0n4) 00:13:46.977 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:46.977 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:46.977 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:46.977 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:46.977 fio-3.35 00:13:46.977 Starting 4 threads 00:13:48.347 00:13:48.347 job0: (groupid=0, 
jobs=1): err= 0: pid=3291044: Mon Jul 15 19:08:28 2024 00:13:48.347 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:13:48.347 slat (usec): min=3, max=11778, avg=104.59, stdev=575.49 00:13:48.347 clat (usec): min=7691, max=43263, avg=13876.30, stdev=4201.23 00:13:48.347 lat (usec): min=7703, max=43269, avg=13980.90, stdev=4235.74 00:13:48.347 clat percentiles (usec): 00:13:48.347 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:13:48.347 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12911], 60.00th=[13829], 00:13:48.347 | 70.00th=[14615], 80.00th=[15926], 90.00th=[18482], 95.00th=[20317], 00:13:48.347 | 99.00th=[32637], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:13:48.347 | 99.99th=[43254] 00:13:48.347 write: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1006msec); 0 zone resets 00:13:48.347 slat (usec): min=3, max=16922, avg=97.53, stdev=589.23 00:13:48.347 clat (usec): min=322, max=46916, avg=13204.03, stdev=6238.47 00:13:48.347 lat (usec): min=2263, max=46923, avg=13301.56, stdev=6252.68 00:13:48.347 clat percentiles (usec): 00:13:48.347 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10290], 00:13:48.347 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11863], 00:13:48.347 | 70.00th=[13042], 80.00th=[14484], 90.00th=[17433], 95.00th=[25560], 00:13:48.347 | 99.00th=[41681], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:13:48.347 | 99.99th=[46924] 00:13:48.347 bw ( KiB/s): min=16584, max=20480, per=27.61%, avg=18532.00, stdev=2754.89, samples=2 00:13:48.347 iops : min= 4146, max= 5120, avg=4633.00, stdev=688.72, samples=2 00:13:48.347 lat (usec) : 500=0.01% 00:13:48.347 lat (msec) : 4=0.09%, 10=9.93%, 20=83.52%, 50=6.46% 00:13:48.347 cpu : usr=7.96%, sys=9.75%, ctx=390, majf=0, minf=1 00:13:48.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:13:48.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.347 issued rwts: total=4608,4761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.347 job1: (groupid=0, jobs=1): err= 0: pid=3291045: Mon Jul 15 19:08:28 2024 00:13:48.347 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:13:48.347 slat (usec): min=3, max=28643, avg=92.99, stdev=745.18 00:13:48.347 clat (usec): min=5925, max=87241, avg=12801.44, stdev=9003.85 00:13:48.347 lat (usec): min=6859, max=87272, avg=12894.43, stdev=9067.65 00:13:48.347 clat percentiles (usec): 00:13:48.347 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:13:48.347 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10814], 00:13:48.347 | 70.00th=[11207], 80.00th=[12125], 90.00th=[13960], 95.00th=[26870], 00:13:48.347 | 99.00th=[58459], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:13:48.347 | 99.99th=[87557] 00:13:48.347 write: IOPS=5535, BW=21.6MiB/s (22.7MB/s)(21.8MiB/1006msec); 0 zone resets 00:13:48.347 slat (usec): min=4, max=11578, avg=81.71, stdev=539.41 00:13:48.347 clat (usec): min=813, max=30601, avg=11073.67, stdev=3441.08 00:13:48.347 lat (usec): min=4257, max=30618, avg=11155.37, stdev=3462.06 00:13:48.347 clat percentiles (usec): 00:13:48.347 | 1.00th=[ 5276], 5.00th=[ 7111], 10.00th=[ 8291], 20.00th=[ 9110], 00:13:48.347 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:13:48.347 | 70.00th=[10945], 80.00th=[12518], 90.00th=[16581], 
95.00th=[19268], 00:13:48.347 | 99.00th=[21365], 99.50th=[22938], 99.90th=[23462], 99.95th=[28705], 00:13:48.347 | 99.99th=[30540] 00:13:48.347 bw ( KiB/s): min=18952, max=24576, per=32.43%, avg=21764.00, stdev=3976.77, samples=2 00:13:48.347 iops : min= 4738, max= 6144, avg=5441.00, stdev=994.19, samples=2 00:13:48.347 lat (usec) : 1000=0.01% 00:13:48.347 lat (msec) : 10=32.78%, 20=62.09%, 50=4.21%, 100=0.91% 00:13:48.347 cpu : usr=7.36%, sys=14.33%, ctx=382, majf=0, minf=1 00:13:48.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:48.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.347 issued rwts: total=5120,5569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.347 job2: (groupid=0, jobs=1): err= 0: pid=3291047: Mon Jul 15 19:08:28 2024 00:13:48.347 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:13:48.347 slat (usec): min=2, max=19109, avg=135.17, stdev=965.67 00:13:48.347 clat (usec): min=4089, max=41197, avg=19123.71, stdev=5076.61 00:13:48.347 lat (usec): min=4098, max=41219, avg=19258.88, stdev=5111.61 00:13:48.347 clat percentiles (usec): 00:13:48.347 | 1.00th=[ 6783], 5.00th=[11600], 10.00th=[13042], 20.00th=[15008], 00:13:48.348 | 30.00th=[16712], 40.00th=[17957], 50.00th=[18482], 60.00th=[19268], 00:13:48.348 | 70.00th=[21627], 80.00th=[22676], 90.00th=[26084], 95.00th=[27395], 00:13:48.348 | 99.00th=[35390], 99.50th=[35390], 99.90th=[40633], 99.95th=[40633], 00:13:48.348 | 99.99th=[41157] 00:13:48.348 write: IOPS=3539, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1006msec); 0 zone resets 00:13:48.348 slat (usec): min=3, max=20484, avg=150.61, stdev=990.11 00:13:48.348 clat (usec): min=5669, max=58784, avg=18442.28, stdev=9074.16 00:13:48.348 lat (usec): min=5687, max=58803, avg=18592.89, stdev=9121.15 00:13:48.348 clat percentiles (usec): 00:13:48.348 | 1.00th=[ 7701], 5.00th=[10159], 10.00th=[10945], 20.00th=[13304], 00:13:48.348 | 30.00th=[14091], 40.00th=[15401], 50.00th=[16057], 60.00th=[17171], 00:13:48.348 | 70.00th=[18744], 80.00th=[20055], 90.00th=[29230], 95.00th=[42206], 00:13:48.348 | 99.00th=[55313], 99.50th=[56361], 99.90th=[58983], 99.95th=[58983], 00:13:48.348 | 99.99th=[58983] 00:13:48.348 bw ( KiB/s): min=12288, max=15184, per=20.47%, avg=13736.00, stdev=2047.78, samples=2 00:13:48.348 iops : min= 3072, max= 3796, avg=3434.00, stdev=511.95, samples=2 00:13:48.348 lat (msec) : 10=2.80%, 20=68.40%, 50=27.50%, 100=1.30% 00:13:48.348 cpu : usr=5.17%, sys=6.77%, ctx=261, majf=0, minf=1 00:13:48.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:48.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.348 issued rwts: total=3072,3561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.348 job3: (groupid=0, jobs=1): err= 0: pid=3291048: Mon Jul 15 19:08:28 2024 00:13:48.348 read: IOPS=2648, BW=10.3MiB/s (10.8MB/s)(10.5MiB/1011msec) 00:13:48.348 slat (usec): min=2, max=31959, avg=188.66, stdev=1699.78 00:13:48.348 clat (usec): min=2500, max=97847, avg=25597.85, stdev=19317.26 00:13:48.348 lat (usec): min=2509, max=97882, avg=25786.51, stdev=19482.49 00:13:48.348 clat percentiles (usec): 00:13:48.348 | 1.00th=[ 7767], 5.00th=[ 9503], 
10.00th=[10945], 20.00th=[12518], 00:13:48.348 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14615], 60.00th=[17957], 00:13:48.348 | 70.00th=[26608], 80.00th=[45876], 90.00th=[57410], 95.00th=[65799], 00:13:48.348 | 99.00th=[79168], 99.50th=[79168], 99.90th=[86508], 99.95th=[98042], 00:13:48.348 | 99.99th=[98042] 00:13:48.348 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:13:48.348 slat (usec): min=3, max=30669, avg=141.36, stdev=1152.77 00:13:48.348 clat (usec): min=3215, max=79611, avg=19335.45, stdev=11604.45 00:13:48.348 lat (usec): min=3222, max=79627, avg=19476.81, stdev=11689.27 00:13:48.348 clat percentiles (usec): 00:13:48.348 | 1.00th=[ 5145], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[12518], 00:13:48.348 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[17433], 00:13:48.348 | 70.00th=[19530], 80.00th=[24773], 90.00th=[37487], 95.00th=[43254], 00:13:48.348 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63177], 99.95th=[79168], 00:13:48.348 | 99.99th=[79168] 00:13:48.348 bw ( KiB/s): min=12216, max=12288, per=18.26%, avg=12252.00, stdev=50.91, samples=2 00:13:48.348 iops : min= 3054, max= 3072, avg=3063.00, stdev=12.73, samples=2 00:13:48.348 lat (msec) : 4=0.28%, 10=9.70%, 20=57.83%, 50=23.46%, 100=8.73% 00:13:48.348 cpu : usr=3.37%, sys=4.55%, ctx=234, majf=0, minf=1 00:13:48.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:48.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:48.348 issued rwts: total=2678,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:48.348 00:13:48.348 Run status group 0 (all jobs): 00:13:48.348 READ: bw=59.8MiB/s (62.7MB/s), 10.3MiB/s-19.9MiB/s (10.8MB/s-20.8MB/s), io=60.5MiB (63.4MB), run=1006-1011msec 00:13:48.348 WRITE: bw=65.5MiB/s (68.7MB/s), 11.9MiB/s-21.6MiB/s (12.4MB/s-22.7MB/s), io=66.3MiB (69.5MB), run=1006-1011msec 00:13:48.348 00:13:48.348 Disk stats (read/write): 00:13:48.348 nvme0n1: ios=4122/4159, merge=0/0, ticks=24339/18877, in_queue=43216, util=97.80% 00:13:48.348 nvme0n2: ios=4832/5120, merge=0/0, ticks=27722/29139, in_queue=56861, util=97.46% 00:13:48.348 nvme0n3: ios=2583/2692, merge=0/0, ticks=41364/43018, in_queue=84382, util=97.80% 00:13:48.348 nvme0n4: ios=2090/2161, merge=0/0, ticks=36081/19890, in_queue=55971, util=95.66% 00:13:48.348 19:08:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:48.348 [global] 00:13:48.348 thread=1 00:13:48.348 invalidate=1 00:13:48.348 rw=randwrite 00:13:48.348 time_based=1 00:13:48.348 runtime=1 00:13:48.348 ioengine=libaio 00:13:48.348 direct=1 00:13:48.348 bs=4096 00:13:48.348 iodepth=128 00:13:48.348 norandommap=0 00:13:48.348 numjobs=1 00:13:48.348 00:13:48.348 verify_dump=1 00:13:48.348 verify_backlog=512 00:13:48.348 verify_state_save=0 00:13:48.348 do_verify=1 00:13:48.348 verify=crc32c-intel 00:13:48.348 [job0] 00:13:48.348 filename=/dev/nvme0n1 00:13:48.348 [job1] 00:13:48.348 filename=/dev/nvme0n2 00:13:48.348 [job2] 00:13:48.348 filename=/dev/nvme0n3 00:13:48.348 [job3] 00:13:48.348 filename=/dev/nvme0n4 00:13:48.348 Could not set queue depth (nvme0n1) 00:13:48.348 Could not set queue depth (nvme0n2) 00:13:48.348 Could not set queue depth (nvme0n3) 00:13:48.348 Could not set queue depth (nvme0n4) 00:13:48.348 
job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.348 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.348 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.348 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:48.348 fio-3.35 00:13:48.348 Starting 4 threads 00:13:49.716 00:13:49.716 job0: (groupid=0, jobs=1): err= 0: pid=3291278: Mon Jul 15 19:08:29 2024 00:13:49.716 read: IOPS=2898, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1004msec) 00:13:49.716 slat (usec): min=2, max=21787, avg=149.07, stdev=1069.23 00:13:49.716 clat (usec): min=512, max=57213, avg=19589.10, stdev=12034.93 00:13:49.716 lat (usec): min=3897, max=64207, avg=19738.17, stdev=12117.69 00:13:49.716 clat percentiles (usec): 00:13:49.716 | 1.00th=[ 4113], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 9765], 00:13:49.716 | 30.00th=[11338], 40.00th=[11863], 50.00th=[15270], 60.00th=[19268], 00:13:49.716 | 70.00th=[22414], 80.00th=[31327], 90.00th=[39060], 95.00th=[43779], 00:13:49.716 | 99.00th=[50594], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:13:49.716 | 99.99th=[57410] 00:13:49.716 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:13:49.716 slat (usec): min=3, max=22363, avg=150.85, stdev=1115.86 00:13:49.716 clat (usec): min=2079, max=82136, avg=22952.91, stdev=16890.67 00:13:49.716 lat (usec): min=2091, max=90805, avg=23103.76, stdev=17003.83 00:13:49.716 clat percentiles (usec): 00:13:49.716 | 1.00th=[ 3130], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 9241], 00:13:49.716 | 30.00th=[10945], 40.00th=[15664], 50.00th=[17695], 60.00th=[22414], 00:13:49.716 | 70.00th=[26870], 80.00th=[33817], 90.00th=[46400], 95.00th=[61080], 00:13:49.716 | 99.00th=[78119], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:13:49.716 | 99.99th=[82314] 00:13:49.716 bw ( KiB/s): min= 8192, max=16384, per=20.90%, avg=12288.00, stdev=5792.62, samples=2 00:13:49.716 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:13:49.716 lat (usec) : 750=0.02% 00:13:49.716 lat (msec) : 4=1.04%, 10=23.27%, 20=34.59%, 50=35.31%, 100=5.78% 00:13:49.716 cpu : usr=2.89%, sys=4.99%, ctx=276, majf=0, minf=15 00:13:49.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:49.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.716 issued rwts: total=2910,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.716 job1: (groupid=0, jobs=1): err= 0: pid=3291279: Mon Jul 15 19:08:29 2024 00:13:49.716 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1006msec) 00:13:49.716 slat (usec): min=3, max=6565, avg=101.19, stdev=572.52 00:13:49.716 clat (usec): min=5201, max=26168, avg=13522.58, stdev=3696.83 00:13:49.716 lat (usec): min=6429, max=26209, avg=13623.76, stdev=3732.35 00:13:49.716 clat percentiles (usec): 00:13:49.716 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10290], 00:13:49.716 | 30.00th=[10683], 40.00th=[11207], 50.00th=[12125], 60.00th=[14222], 00:13:49.716 | 70.00th=[15795], 80.00th=[17171], 90.00th=[18744], 95.00th=[20317], 00:13:49.716 | 99.00th=[22676], 99.50th=[23725], 99.90th=[25035], 99.95th=[25035], 00:13:49.716 | 
99.99th=[26084] 00:13:49.716 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:13:49.716 slat (usec): min=3, max=7927, avg=91.98, stdev=548.76 00:13:49.716 clat (usec): min=5601, max=27897, avg=12596.37, stdev=3128.75 00:13:49.716 lat (usec): min=5613, max=27905, avg=12688.36, stdev=3170.84 00:13:49.716 clat percentiles (usec): 00:13:49.716 | 1.00th=[ 6849], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10159], 00:13:49.716 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12518], 00:13:49.716 | 70.00th=[13829], 80.00th=[14746], 90.00th=[16909], 95.00th=[18220], 00:13:49.716 | 99.00th=[22676], 99.50th=[25560], 99.90th=[27919], 99.95th=[27919], 00:13:49.716 | 99.99th=[27919] 00:13:49.716 bw ( KiB/s): min=16384, max=23744, per=34.13%, avg=20064.00, stdev=5204.31, samples=2 00:13:49.716 iops : min= 4096, max= 5936, avg=5016.00, stdev=1301.08, samples=2 00:13:49.716 lat (msec) : 10=14.35%, 20=81.50%, 50=4.15% 00:13:49.716 cpu : usr=8.16%, sys=11.34%, ctx=347, majf=0, minf=15 00:13:49.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:49.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.716 issued rwts: total=4631,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.716 job2: (groupid=0, jobs=1): err= 0: pid=3291280: Mon Jul 15 19:08:29 2024 00:13:49.716 read: IOPS=2664, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1008msec) 00:13:49.716 slat (usec): min=3, max=20501, avg=178.79, stdev=1252.27 00:13:49.716 clat (usec): min=1007, max=98332, avg=20529.64, stdev=13090.91 00:13:49.716 lat (usec): min=5464, max=98351, avg=20708.43, stdev=13207.58 00:13:49.716 clat percentiles (usec): 00:13:49.716 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[10683], 20.00th=[12387], 00:13:49.716 | 30.00th=[13173], 40.00th=[14484], 50.00th=[16188], 60.00th=[17171], 00:13:49.716 | 70.00th=[21627], 80.00th=[28181], 90.00th=[35390], 95.00th=[46924], 00:13:49.716 | 99.00th=[80217], 99.50th=[86508], 99.90th=[98042], 99.95th=[98042], 00:13:49.716 | 99.99th=[98042] 00:13:49.716 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:13:49.716 slat (usec): min=4, max=32159, avg=160.32, stdev=1139.66 00:13:49.716 clat (usec): min=1533, max=102681, avg=23667.53, stdev=21957.61 00:13:49.716 lat (usec): min=1545, max=102691, avg=23827.85, stdev=22080.24 00:13:49.716 clat percentiles (msec): 00:13:49.716 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 11], 00:13:49.716 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:13:49.716 | 70.00th=[ 19], 80.00th=[ 29], 90.00th=[ 64], 95.00th=[ 75], 00:13:49.716 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 103], 00:13:49.716 | 99.99th=[ 103] 00:13:49.716 bw ( KiB/s): min= 8176, max=16384, per=20.89%, avg=12280.00, stdev=5803.93, samples=2 00:13:49.716 iops : min= 2044, max= 4096, avg=3070.00, stdev=1450.98, samples=2 00:13:49.716 lat (msec) : 2=0.05%, 4=0.12%, 10=11.38%, 20=58.02%, 50=21.99% 00:13:49.716 lat (msec) : 100=8.32%, 250=0.12% 00:13:49.716 cpu : usr=4.57%, sys=6.06%, ctx=288, majf=0, minf=9 00:13:49.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:49.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.716 issued rwts: total=2686,3072,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:13:49.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.716 job3: (groupid=0, jobs=1): err= 0: pid=3291281: Mon Jul 15 19:08:29 2024 00:13:49.716 read: IOPS=3665, BW=14.3MiB/s (15.0MB/s)(15.0MiB/1045msec) 00:13:49.716 slat (usec): min=2, max=8172, avg=121.03, stdev=643.07 00:13:49.716 clat (usec): min=8741, max=65146, avg=16684.98, stdev=9546.51 00:13:49.716 lat (usec): min=9081, max=65151, avg=16806.01, stdev=9580.30 00:13:49.716 clat percentiles (usec): 00:13:49.716 | 1.00th=[10159], 5.00th=[11600], 10.00th=[12256], 20.00th=[12780], 00:13:49.716 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[14222], 00:13:49.716 | 70.00th=[14746], 80.00th=[16188], 90.00th=[22414], 95.00th=[43254], 00:13:49.716 | 99.00th=[59507], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:13:49.716 | 99.99th=[65274] 00:13:49.716 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:13:49.716 slat (usec): min=3, max=9081, avg=120.47, stdev=701.60 00:13:49.716 clat (usec): min=8472, max=41883, avg=16645.08, stdev=7352.82 00:13:49.716 lat (usec): min=8488, max=41895, avg=16765.55, stdev=7385.42 00:13:49.716 clat percentiles (usec): 00:13:49.716 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[11600], 20.00th=[12125], 00:13:49.716 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[13829], 00:13:49.716 | 70.00th=[15008], 80.00th=[22938], 90.00th=[29230], 95.00th=[32375], 00:13:49.716 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:49.716 | 99.99th=[41681] 00:13:49.716 bw ( KiB/s): min=14152, max=18616, per=27.87%, avg=16384.00, stdev=3156.52, samples=2 00:13:49.716 iops : min= 3538, max= 4654, avg=4096.00, stdev=789.13, samples=2 00:13:49.716 lat (msec) : 10=2.18%, 20=80.03%, 50=16.60%, 100=1.19% 00:13:49.716 cpu : usr=4.98%, sys=9.20%, ctx=309, majf=0, minf=11 00:13:49.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:49.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.716 issued rwts: total=3830,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.716 00:13:49.716 Run status group 0 (all jobs): 00:13:49.716 READ: bw=52.5MiB/s (55.1MB/s), 10.4MiB/s-18.0MiB/s (10.9MB/s-18.9MB/s), io=54.9MiB (57.6MB), run=1004-1045msec 00:13:49.716 WRITE: bw=57.4MiB/s (60.2MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.8MB/s), io=60.0MiB (62.9MB), run=1004-1045msec 00:13:49.716 00:13:49.716 Disk stats (read/write): 00:13:49.716 nvme0n1: ios=2089/2439, merge=0/0, ticks=22520/35798, in_queue=58318, util=99.50% 00:13:49.716 nvme0n2: ios=4128/4148, merge=0/0, ticks=27323/22925, in_queue=50248, util=90.96% 00:13:49.716 nvme0n3: ios=2482/2560, merge=0/0, ticks=48223/55866, in_queue=104089, util=92.07% 00:13:49.716 nvme0n4: ios=3219/3584, merge=0/0, ticks=16292/15868, in_queue=32160, util=97.58% 00:13:49.716 19:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:49.716 19:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3291417 00:13:49.716 19:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:49.716 19:08:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:49.716 [global] 00:13:49.716 thread=1 00:13:49.716 invalidate=1 00:13:49.716 rw=read 
00:13:49.716 time_based=1 00:13:49.716 runtime=10 00:13:49.716 ioengine=libaio 00:13:49.716 direct=1 00:13:49.716 bs=4096 00:13:49.716 iodepth=1 00:13:49.716 norandommap=1 00:13:49.716 numjobs=1 00:13:49.716 00:13:49.716 [job0] 00:13:49.716 filename=/dev/nvme0n1 00:13:49.716 [job1] 00:13:49.716 filename=/dev/nvme0n2 00:13:49.716 [job2] 00:13:49.716 filename=/dev/nvme0n3 00:13:49.716 [job3] 00:13:49.716 filename=/dev/nvme0n4 00:13:49.716 Could not set queue depth (nvme0n1) 00:13:49.716 Could not set queue depth (nvme0n2) 00:13:49.716 Could not set queue depth (nvme0n3) 00:13:49.716 Could not set queue depth (nvme0n4) 00:13:49.716 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.716 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.716 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.716 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.716 fio-3.35 00:13:49.716 Starting 4 threads 00:13:52.992 19:08:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:52.992 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=26238976, buflen=4096 00:13:52.992 fio: pid=3291508, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:52.992 19:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:52.992 19:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:52.992 19:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:52.992 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=25477120, buflen=4096 00:13:52.992 fio: pid=3291507, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:53.249 19:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.249 19:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:53.249 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10657792, buflen=4096 00:13:53.249 fio: pid=3291505, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:53.508 19:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.508 19:08:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:53.508 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=385024, buflen=4096 00:13:53.508 fio: pid=3291506, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:53.766 00:13:53.767 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3291505: Mon Jul 15 19:08:33 2024 00:13:53.767 read: IOPS=759, BW=3035KiB/s (3108kB/s)(10.2MiB/3429msec) 00:13:53.767 slat (usec): min=4, max=13854, avg=23.58, stdev=369.72 00:13:53.767 clat (usec): min=294, max=41436, avg=1282.18, 
stdev=6064.65 00:13:53.767 lat (usec): min=299, max=55036, avg=1300.84, stdev=6107.26 00:13:53.767 clat percentiles (usec): 00:13:53.767 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:13:53.767 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 351], 00:13:53.767 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 400], 95.00th=[ 429], 00:13:53.767 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:53.767 | 99.99th=[41681] 00:13:53.767 bw ( KiB/s): min= 696, max= 9696, per=17.76%, avg=2944.67, stdev=3350.49, samples=6 00:13:53.767 iops : min= 174, max= 2424, avg=736.17, stdev=837.62, samples=6 00:13:53.767 lat (usec) : 500=96.47%, 750=1.08%, 1000=0.04% 00:13:53.767 lat (msec) : 10=0.08%, 20=0.04%, 50=2.27% 00:13:53.767 cpu : usr=0.41%, sys=1.20%, ctx=2605, majf=0, minf=1 00:13:53.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:53.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 issued rwts: total=2603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:53.767 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3291506: Mon Jul 15 19:08:33 2024 00:13:53.767 read: IOPS=25, BW=102KiB/s (104kB/s)(376KiB/3697msec) 00:13:53.767 slat (usec): min=12, max=19850, avg=461.93, stdev=2388.35 00:13:53.767 clat (usec): min=440, max=41480, avg=38850.69, stdev=9125.25 00:13:53.767 lat (usec): min=470, max=60954, avg=39238.64, stdev=9497.32 00:13:53.767 clat percentiles (usec): 00:13:53.767 | 1.00th=[ 441], 5.00th=[ 709], 10.00th=[40633], 20.00th=[41157], 00:13:53.767 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:53.767 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:53.767 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:53.767 | 99.99th=[41681] 00:13:53.767 bw ( KiB/s): min= 87, max= 120, per=0.61%, avg=101.43, stdev=12.78, samples=7 00:13:53.767 iops : min= 21, max= 30, avg=25.14, stdev= 3.18, samples=7 00:13:53.767 lat (usec) : 500=1.05%, 750=4.21% 00:13:53.767 lat (msec) : 50=93.68% 00:13:53.767 cpu : usr=0.08%, sys=0.19%, ctx=100, majf=0, minf=1 00:13:53.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:53.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:53.767 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3291507: Mon Jul 15 19:08:33 2024 00:13:53.767 read: IOPS=1968, BW=7871KiB/s (8060kB/s)(24.3MiB/3161msec) 00:13:53.767 slat (usec): min=5, max=7583, avg=13.84, stdev=126.50 00:13:53.767 clat (usec): min=285, max=41020, avg=487.67, stdev=2415.38 00:13:53.767 lat (usec): min=291, max=41055, avg=501.51, stdev=2419.70 00:13:53.767 clat percentiles (usec): 00:13:53.767 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:13:53.767 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 347], 00:13:53.767 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 379], 00:13:53.767 | 99.00th=[ 441], 99.50th=[ 486], 99.90th=[41157], 99.95th=[41157], 
00:13:53.767 | 99.99th=[41157] 00:13:53.767 bw ( KiB/s): min= 103, max=11856, per=47.14%, avg=7815.83, stdev=5300.99, samples=6 00:13:53.767 iops : min= 25, max= 2964, avg=1953.83, stdev=1325.47, samples=6 00:13:53.767 lat (usec) : 500=99.50%, 750=0.08%, 1000=0.02% 00:13:53.767 lat (msec) : 2=0.02%, 20=0.02%, 50=0.35% 00:13:53.767 cpu : usr=1.49%, sys=3.51%, ctx=6225, majf=0, minf=1 00:13:53.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:53.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 issued rwts: total=6221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:53.767 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3291508: Mon Jul 15 19:08:33 2024 00:13:53.767 read: IOPS=2219, BW=8876KiB/s (9089kB/s)(25.0MiB/2887msec) 00:13:53.767 slat (nsec): min=5880, max=59667, avg=13193.33, stdev=5394.69 00:13:53.767 clat (usec): min=295, max=41078, avg=432.48, stdev=1133.72 00:13:53.767 lat (usec): min=301, max=41088, avg=445.68, stdev=1133.74 00:13:53.767 clat percentiles (usec): 00:13:53.767 | 1.00th=[ 310], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 363], 00:13:53.767 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 400], 60.00th=[ 412], 00:13:53.767 | 70.00th=[ 424], 80.00th=[ 433], 90.00th=[ 449], 95.00th=[ 478], 00:13:53.767 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 1172], 99.95th=[41157], 00:13:53.767 | 99.99th=[41157] 00:13:53.767 bw ( KiB/s): min= 6208, max=10123, per=53.10%, avg=8803.80, stdev=1515.57, samples=5 00:13:53.767 iops : min= 1552, max= 2530, avg=2200.80, stdev=378.73, samples=5 00:13:53.767 lat (usec) : 500=98.42%, 750=1.44%, 1000=0.02% 00:13:53.767 lat (msec) : 2=0.02%, 4=0.02%, 50=0.08% 00:13:53.767 cpu : usr=1.87%, sys=4.71%, ctx=6407, majf=0, minf=1 00:13:53.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:53.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.767 issued rwts: total=6407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:53.767 00:13:53.767 Run status group 0 (all jobs): 00:13:53.767 READ: bw=16.2MiB/s (17.0MB/s), 102KiB/s-8876KiB/s (104kB/s-9089kB/s), io=59.9MiB (62.8MB), run=2887-3697msec 00:13:53.767 00:13:53.767 Disk stats (read/write): 00:13:53.767 nvme0n1: ios=2600/0, merge=0/0, ticks=3228/0, in_queue=3228, util=95.42% 00:13:53.767 nvme0n2: ios=130/0, merge=0/0, ticks=4493/0, in_queue=4493, util=99.25% 00:13:53.767 nvme0n3: ios=6146/0, merge=0/0, ticks=4119/0, in_queue=4119, util=98.47% 00:13:53.767 nvme0n4: ios=6337/0, merge=0/0, ticks=2678/0, in_queue=2678, util=96.71% 00:13:53.767 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:53.767 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:54.025 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.025 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc4 00:13:54.283 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.283 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:54.541 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:54.541 19:08:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:54.801 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:54.801 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3291417 00:13:54.801 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:54.801 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:55.060 nvmf hotplug test: fio failed as expected 00:13:55.060 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.318 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.318 rmmod nvme_tcp 00:13:55.318 rmmod nvme_fabrics 00:13:55.318 rmmod nvme_keyring 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target 
-- nvmf/common.sh@124 -- # set -e 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3289378 ']' 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3289378 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3289378 ']' 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3289378 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3289378 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3289378' 00:13:55.319 killing process with pid 3289378 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3289378 00:13:55.319 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3289378 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.577 19:08:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.153 19:08:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.153 00:13:58.153 real 0m24.017s 00:13:58.153 user 1m22.667s 00:13:58.153 sys 0m7.386s 00:13:58.153 19:08:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.153 19:08:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.153 ************************************ 00:13:58.153 END TEST nvmf_fio_target 00:13:58.153 ************************************ 00:13:58.153 19:08:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:58.153 19:08:38 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:58.153 19:08:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.153 19:08:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.153 19:08:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.153 ************************************ 00:13:58.153 START TEST nvmf_bdevio 00:13:58.153 ************************************ 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp 00:13:58.153 * Looking for test storage... 00:13:58.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.153 19:08:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:59.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:59.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:59.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:59.536 
Found net devices under 0000:0a:00.1: cvl_0_1 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.536 19:08:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:59.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:13:59.794 00:13:59.794 --- 10.0.0.2 ping statistics --- 00:13:59.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.794 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:13:59.794 00:13:59.794 --- 10.0.0.1 ping statistics --- 00:13:59.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.794 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3294129 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3294129 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3294129 ']' 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.794 19:08:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:59.794 [2024-07-15 19:08:40.160075] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:13:59.794 [2024-07-15 19:08:40.160156] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.794 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.052 [2024-07-15 19:08:40.231187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.052 [2024-07-15 19:08:40.349841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.052 [2024-07-15 19:08:40.349907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:00.052 [2024-07-15 19:08:40.349925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.052 [2024-07-15 19:08:40.349938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.052 [2024-07-15 19:08:40.349949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.052 [2024-07-15 19:08:40.350041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:00.052 [2024-07-15 19:08:40.350117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:00.052 [2024-07-15 19:08:40.350168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:00.052 [2024-07-15 19:08:40.350172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.989 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.989 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:00.989 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.989 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.989 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:00.990 [2024-07-15 19:08:41.121615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:00.990 Malloc0 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:14:00.990 [2024-07-15 19:08:41.173523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:00.990 { 00:14:00.990 "params": { 00:14:00.990 "name": "Nvme$subsystem", 00:14:00.990 "trtype": "$TEST_TRANSPORT", 00:14:00.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:00.990 "adrfam": "ipv4", 00:14:00.990 "trsvcid": "$NVMF_PORT", 00:14:00.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:00.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:00.990 "hdgst": ${hdgst:-false}, 00:14:00.990 "ddgst": ${ddgst:-false} 00:14:00.990 }, 00:14:00.990 "method": "bdev_nvme_attach_controller" 00:14:00.990 } 00:14:00.990 EOF 00:14:00.990 )") 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:00.990 19:08:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:00.990 "params": { 00:14:00.990 "name": "Nvme1", 00:14:00.990 "trtype": "tcp", 00:14:00.990 "traddr": "10.0.0.2", 00:14:00.990 "adrfam": "ipv4", 00:14:00.990 "trsvcid": "4420", 00:14:00.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.990 "hdgst": false, 00:14:00.990 "ddgst": false 00:14:00.990 }, 00:14:00.990 "method": "bdev_nvme_attach_controller" 00:14:00.990 }' 00:14:00.990 [2024-07-15 19:08:41.220459] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:14:00.990 [2024-07-15 19:08:41.220547] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3294286 ] 00:14:00.990 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.990 [2024-07-15 19:08:41.282078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:00.990 [2024-07-15 19:08:41.398293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.990 [2024-07-15 19:08:41.398344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.990 [2024-07-15 19:08:41.398347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.248 I/O targets: 00:14:01.248 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:01.248 00:14:01.248 00:14:01.248 CUnit - A unit testing framework for C - Version 2.1-3 00:14:01.248 http://cunit.sourceforge.net/ 00:14:01.248 00:14:01.248 00:14:01.248 Suite: bdevio tests on: Nvme1n1 00:14:01.248 Test: blockdev write read block ...passed 00:14:01.507 Test: blockdev write zeroes read block ...passed 00:14:01.507 Test: blockdev write zeroes read no split ...passed 00:14:01.507 Test: blockdev write zeroes read split ...passed 00:14:01.507 Test: blockdev write zeroes read split partial ...passed 00:14:01.507 Test: blockdev reset ...[2024-07-15 19:08:41.839803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:01.507 [2024-07-15 19:08:41.839916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19dd580 (9): Bad file descriptor 00:14:01.507 [2024-07-15 19:08:41.855698] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:01.507 passed 00:14:01.507 Test: blockdev write read 8 blocks ...passed 00:14:01.507 Test: blockdev write read size > 128k ...passed 00:14:01.507 Test: blockdev write read invalid size ...passed 00:14:01.765 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:01.765 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:01.765 Test: blockdev write read max offset ...passed 00:14:01.765 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:01.765 Test: blockdev writev readv 8 blocks ...passed 00:14:01.765 Test: blockdev writev readv 30 x 1block ...passed 00:14:01.765 Test: blockdev writev readv block ...passed 00:14:01.765 Test: blockdev writev readv size > 128k ...passed 00:14:01.765 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:01.765 Test: blockdev comparev and writev ...[2024-07-15 19:08:42.113791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.113825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:01.765 [2024-07-15 19:08:42.113849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.113866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:01.765 [2024-07-15 19:08:42.114266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.114290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:01.765 [2024-07-15 19:08:42.114312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.114329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:01.765 [2024-07-15 19:08:42.114713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.114737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:01.765 [2024-07-15 19:08:42.114759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.114775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:01.765 [2024-07-15 19:08:42.115147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.115170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:01.765 [2024-07-15 19:08:42.115192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:01.765 [2024-07-15 19:08:42.115208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:01.765 passed 00:14:02.025 Test: blockdev nvme passthru rw ...passed 00:14:02.025 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:08:42.198251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.025 [2024-07-15 19:08:42.198278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:02.025 [2024-07-15 19:08:42.198482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.025 [2024-07-15 19:08:42.198505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:02.025 [2024-07-15 19:08:42.198727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.025 [2024-07-15 19:08:42.198751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:02.025 [2024-07-15 19:08:42.198955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:02.025 [2024-07-15 19:08:42.198978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:02.025 passed 00:14:02.025 Test: blockdev nvme admin passthru ...passed 00:14:02.025 Test: blockdev copy ...passed 00:14:02.025 00:14:02.025 Run Summary: Type Total Ran Passed Failed Inactive 00:14:02.025 suites 1 1 n/a 0 0 00:14:02.025 tests 23 23 23 0 0 00:14:02.025 asserts 152 152 152 0 n/a 00:14:02.025 00:14:02.025 Elapsed time = 1.248 seconds 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.286 rmmod nvme_tcp 00:14:02.286 rmmod nvme_fabrics 00:14:02.286 rmmod nvme_keyring 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3294129 ']' 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3294129 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3294129 ']' 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3294129 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3294129 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3294129' 00:14:02.286 killing process with pid 3294129 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3294129 00:14:02.286 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3294129 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.545 19:08:42 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.080 19:08:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.080 00:14:05.080 real 0m6.898s 00:14:05.080 user 0m13.198s 00:14:05.080 sys 0m1.987s 00:14:05.080 19:08:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:05.080 19:08:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.080 ************************************ 00:14:05.080 END TEST nvmf_bdevio 00:14:05.080 ************************************ 00:14:05.080 19:08:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:05.080 19:08:44 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:05.080 19:08:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:05.080 19:08:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.080 19:08:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.080 ************************************ 00:14:05.080 START TEST nvmf_auth_target 00:14:05.080 ************************************ 00:14:05.080 19:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:05.080 * Looking for test storage... 
00:14:05.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.080 19:08:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:05.081 19:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.457 19:08:46 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:06.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:06.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.457 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.716 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.716 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.716 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.716 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.716 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:14:06.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:06.716 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.716 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:06.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.717 19:08:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:14:06.717 00:14:06.717 --- 10.0.0.2 ping statistics --- 00:14:06.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.717 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:14:06.717 00:14:06.717 --- 10.0.0.1 ping statistics --- 00:14:06.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.717 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3296357 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3296357 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3296357 ']' 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.717 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.976 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.976 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:06.976 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.976 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.976 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3296492 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2aa3ed0bfef41d9a8b2950a76a3f1860c8f26a5fa1b7cbee 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vIb 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2aa3ed0bfef41d9a8b2950a76a3f1860c8f26a5fa1b7cbee 0 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2aa3ed0bfef41d9a8b2950a76a3f1860c8f26a5fa1b7cbee 0 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2aa3ed0bfef41d9a8b2950a76a3f1860c8f26a5fa1b7cbee 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vIb 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vIb 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.vIb 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=26f58e25551fdd9f592b028c311a000dc35a8d804f419117fe4a07e469db419b 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.34s 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 26f58e25551fdd9f592b028c311a000dc35a8d804f419117fe4a07e469db419b 3 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 26f58e25551fdd9f592b028c311a000dc35a8d804f419117fe4a07e469db419b 3 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=26f58e25551fdd9f592b028c311a000dc35a8d804f419117fe4a07e469db419b 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.34s 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.34s 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.34s 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=23194a35f09a2b7849d928fb37ea9bd6 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rMr 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 23194a35f09a2b7849d928fb37ea9bd6 1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 23194a35f09a2b7849d928fb37ea9bd6 1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=23194a35f09a2b7849d928fb37ea9bd6 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rMr 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rMr 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.rMr 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a849aef1a87a4f61fcd864f37e24ad20f37c49533421df21 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vMN 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a849aef1a87a4f61fcd864f37e24ad20f37c49533421df21 2 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a849aef1a87a4f61fcd864f37e24ad20f37c49533421df21 2 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a849aef1a87a4f61fcd864f37e24ad20f37c49533421df21 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vMN 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vMN 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.vMN 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c72daf0d19adf9c4f573c8dd631f6de99da7e15ce7088614 00:14:07.264 
19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ouj 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c72daf0d19adf9c4f573c8dd631f6de99da7e15ce7088614 2 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c72daf0d19adf9c4f573c8dd631f6de99da7e15ce7088614 2 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c72daf0d19adf9c4f573c8dd631f6de99da7e15ce7088614 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ouj 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ouj 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ouj 00:14:07.264 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae1fab07b6b2f0d8e87306748e683607 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aPD 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae1fab07b6b2f0d8e87306748e683607 1 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae1fab07b6b2f0d8e87306748e683607 1 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae1fab07b6b2f0d8e87306748e683607 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:07.265 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aPD 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aPD 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.aPD 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=31716a2b480f1a9b13189dd6b73c85ef714ef696f094aec568327d15540e096b 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.e0s 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 31716a2b480f1a9b13189dd6b73c85ef714ef696f094aec568327d15540e096b 3 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 31716a2b480f1a9b13189dd6b73c85ef714ef696f094aec568327d15540e096b 3 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=31716a2b480f1a9b13189dd6b73c85ef714ef696f094aec568327d15540e096b 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.e0s 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.e0s 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.e0s 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3296357 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3296357 ']' 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
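
The trace above only shows the individual commands inside gen_dhchap_key and format_dhchap_key from nvmf/common.sh (xxd pulling hex from /dev/urandom, mktemp, an elided "python -" heredoc, chmod 0600), so the actual DHHC-1 framing never appears in the log. Below is a minimal sketch of what those steps amount to; the name gen_dhchap_key_sketch is made up for illustration, and the CRC-32 little-endian trailer inside the base64 payload is an assumption, chosen to be consistent with the DHHC-1:<digest>:<base64>: secrets that show up later in this log.

gen_dhchap_key_sketch() {
    # sketch of gen_dhchap_key/format_key as traced above; not the verbatim helpers
    local digest=$1 len=$2                    # e.g. "null" 48, "sha512" 64
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file

    # len hex characters (len/2 raw bytes) of key material, as in the trace
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")

    # assumed framing: DHHC-1:<two-digit digest id>:<base64(ASCII key + 4-byte CRC-32, little endian)>:
    python3 - "$key" "${digests[$digest]}" > "$file" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
PYEOF

    chmod 0600 "$file"
    echo "$file"                              # caller stores this path in keys[]/ckeys[]
}

The --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: passed to nvme connect further down base64-decodes to the same 48-character hex string generated for keys[0] here (2aa3ed0b...cbee) plus a 4-byte trailer, which is what motivates the CRC-32 assumption in this sketch.
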
00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.524 19:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.782 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.782 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:07.782 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3296492 /var/tmp/host.sock 00:14:07.782 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3296492 ']' 00:14:07.782 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:07.782 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.783 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:07.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:07.783 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.783 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vIb 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vIb 00:14:08.040 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vIb 00:14:08.297 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.34s ]] 00:14:08.297 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.34s 00:14:08.297 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.297 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.297 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.297 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.34s 00:14:08.297 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.34s 00:14:08.554 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:08.554 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rMr 00:14:08.554 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.554 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.554 19:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.554 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rMr 00:14:08.554 19:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rMr 00:14:08.812 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.vMN ]] 00:14:08.812 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vMN 00:14:08.812 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.812 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.812 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.812 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vMN 00:14:08.812 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vMN 00:14:09.070 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:09.070 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ouj 00:14:09.070 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.070 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.070 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.070 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ouj 00:14:09.070 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ouj 00:14:09.328 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.aPD ]] 00:14:09.328 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aPD 00:14:09.328 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.328 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.328 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.328 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aPD 00:14:09.328 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.aPD 00:14:09.586 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:09.586 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.e0s 00:14:09.586 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.586 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.586 19:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.586 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.e0s 00:14:09.586 19:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.e0s 00:14:09.844 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:09.844 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:09.844 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:09.844 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.844 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:09.844 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.102 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.373 00:14:10.373 19:08:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.373 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.373 19:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.631 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.631 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.631 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.631 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.631 19:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.631 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.631 { 00:14:10.631 "cntlid": 1, 00:14:10.631 "qid": 0, 00:14:10.631 "state": "enabled", 00:14:10.631 "thread": "nvmf_tgt_poll_group_000", 00:14:10.631 "listen_address": { 00:14:10.631 "trtype": "TCP", 00:14:10.631 "adrfam": "IPv4", 00:14:10.631 "traddr": "10.0.0.2", 00:14:10.631 "trsvcid": "4420" 00:14:10.631 }, 00:14:10.631 "peer_address": { 00:14:10.631 "trtype": "TCP", 00:14:10.631 "adrfam": "IPv4", 00:14:10.631 "traddr": "10.0.0.1", 00:14:10.631 "trsvcid": "36572" 00:14:10.631 }, 00:14:10.631 "auth": { 00:14:10.631 "state": "completed", 00:14:10.631 "digest": "sha256", 00:14:10.631 "dhgroup": "null" 00:14:10.631 } 00:14:10.631 } 00:14:10.631 ]' 00:14:10.631 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.890 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.890 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.890 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:10.890 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.890 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.890 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.890 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.149 19:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:14:12.118 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.118 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.118 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.118 19:08:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.118 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.118 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.118 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:12.118 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.376 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.634 00:14:12.634 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.634 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.634 19:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.893 { 00:14:12.893 "cntlid": 3, 00:14:12.893 "qid": 0, 00:14:12.893 
"state": "enabled", 00:14:12.893 "thread": "nvmf_tgt_poll_group_000", 00:14:12.893 "listen_address": { 00:14:12.893 "trtype": "TCP", 00:14:12.893 "adrfam": "IPv4", 00:14:12.893 "traddr": "10.0.0.2", 00:14:12.893 "trsvcid": "4420" 00:14:12.893 }, 00:14:12.893 "peer_address": { 00:14:12.893 "trtype": "TCP", 00:14:12.893 "adrfam": "IPv4", 00:14:12.893 "traddr": "10.0.0.1", 00:14:12.893 "trsvcid": "51230" 00:14:12.893 }, 00:14:12.893 "auth": { 00:14:12.893 "state": "completed", 00:14:12.893 "digest": "sha256", 00:14:12.893 "dhgroup": "null" 00:14:12.893 } 00:14:12.893 } 00:14:12.893 ]' 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.893 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.150 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:13.150 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.150 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.150 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.150 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.408 19:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.343 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:14.601 19:08:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.601 19:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.860 00:14:14.860 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.860 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.860 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.118 { 00:14:15.118 "cntlid": 5, 00:14:15.118 "qid": 0, 00:14:15.118 "state": "enabled", 00:14:15.118 "thread": "nvmf_tgt_poll_group_000", 00:14:15.118 "listen_address": { 00:14:15.118 "trtype": "TCP", 00:14:15.118 "adrfam": "IPv4", 00:14:15.118 "traddr": "10.0.0.2", 00:14:15.118 "trsvcid": "4420" 00:14:15.118 }, 00:14:15.118 "peer_address": { 00:14:15.118 "trtype": "TCP", 00:14:15.118 "adrfam": "IPv4", 00:14:15.118 "traddr": "10.0.0.1", 00:14:15.118 "trsvcid": "51266" 00:14:15.118 }, 00:14:15.118 "auth": { 00:14:15.118 "state": "completed", 00:14:15.118 "digest": "sha256", 00:14:15.118 "dhgroup": "null" 00:14:15.118 } 00:14:15.118 } 00:14:15.118 ]' 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.118 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.376 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:15.376 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:15.376 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.376 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.376 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.634 19:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:14:16.571 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.571 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.571 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.571 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.571 19:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.571 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.572 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.572 19:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.830 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.088 00:14:17.088 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.088 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.088 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.346 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.346 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.346 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.346 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.346 19:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.346 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.346 { 00:14:17.346 "cntlid": 7, 00:14:17.346 "qid": 0, 00:14:17.346 "state": "enabled", 00:14:17.346 "thread": "nvmf_tgt_poll_group_000", 00:14:17.346 "listen_address": { 00:14:17.346 "trtype": "TCP", 00:14:17.346 "adrfam": "IPv4", 00:14:17.346 "traddr": "10.0.0.2", 00:14:17.346 "trsvcid": "4420" 00:14:17.346 }, 00:14:17.346 "peer_address": { 00:14:17.346 "trtype": "TCP", 00:14:17.346 "adrfam": "IPv4", 00:14:17.346 "traddr": "10.0.0.1", 00:14:17.346 "trsvcid": "51294" 00:14:17.346 }, 00:14:17.346 "auth": { 00:14:17.346 "state": "completed", 00:14:17.346 "digest": "sha256", 00:14:17.346 "dhgroup": "null" 00:14:17.346 } 00:14:17.346 } 00:14:17.346 ]' 00:14:17.346 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.604 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.604 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.604 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:17.604 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.604 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.604 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.604 19:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.862 19:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:18.799 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.057 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.315 00:14:19.315 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.315 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.315 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.573 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.573 19:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.573 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:19.573 19:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.573 19:09:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.831 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.831 { 00:14:19.831 "cntlid": 9, 00:14:19.831 "qid": 0, 00:14:19.831 "state": "enabled", 00:14:19.831 "thread": "nvmf_tgt_poll_group_000", 00:14:19.831 "listen_address": { 00:14:19.831 "trtype": "TCP", 00:14:19.831 "adrfam": "IPv4", 00:14:19.832 "traddr": "10.0.0.2", 00:14:19.832 "trsvcid": "4420" 00:14:19.832 }, 00:14:19.832 "peer_address": { 00:14:19.832 "trtype": "TCP", 00:14:19.832 "adrfam": "IPv4", 00:14:19.832 "traddr": "10.0.0.1", 00:14:19.832 "trsvcid": "51328" 00:14:19.832 }, 00:14:19.832 "auth": { 00:14:19.832 "state": "completed", 00:14:19.832 "digest": "sha256", 00:14:19.832 "dhgroup": "ffdhe2048" 00:14:19.832 } 00:14:19.832 } 00:14:19.832 ]' 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.832 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.090 19:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.022 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.280 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.538 00:14:21.538 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.538 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.538 19:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.795 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.795 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.795 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.795 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.795 19:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.795 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.795 { 00:14:21.795 "cntlid": 11, 00:14:21.795 "qid": 0, 00:14:21.795 "state": "enabled", 00:14:21.795 "thread": "nvmf_tgt_poll_group_000", 00:14:21.795 "listen_address": { 00:14:21.795 "trtype": "TCP", 00:14:21.795 "adrfam": "IPv4", 00:14:21.795 "traddr": "10.0.0.2", 00:14:21.795 "trsvcid": "4420" 00:14:21.795 }, 00:14:21.795 "peer_address": { 00:14:21.795 "trtype": "TCP", 00:14:21.795 "adrfam": "IPv4", 00:14:21.795 "traddr": "10.0.0.1", 00:14:21.795 "trsvcid": "58918" 00:14:21.795 }, 00:14:21.795 "auth": { 00:14:21.795 "state": "completed", 00:14:21.795 "digest": "sha256", 00:14:21.795 "dhgroup": "ffdhe2048" 00:14:21.795 } 00:14:21.795 } 00:14:21.795 ]' 00:14:21.795 
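
Each pass of the digest/dhgroup/key loop that the trace keeps repeating reduces to the same sequence of RPCs and nvme-cli calls. The block below is a rough reconstruction of connect_authenticate as it can be read back out of the trace (target/auth.sh@34-56), not the verbatim script: rpc_cmd and hostrpc are the wrappers seen above (rpc.py against the target socket and against /var/tmp/host.sock respectively), keys[]/ckeys[] are the temp key files generated earlier, and $hostid stands for the host UUID used throughout this run (5b23e107-7094-e311-b1cb-001e67a97d55).

connect_authenticate_sketch() {
    # sketch reconstructed from the xtrace above; assumes rpc_cmd, hostrpc, keys[], ckeys[], $hostid
    local digest=$1 dhgroup=$2 keyid=$3 key=key$3
    local ckey=() csecret=()
    if [[ -n ${ckeys[keyid]} ]]; then
        ckey=(--dhchap-ctrlr-key "ckey$keyid")
        csecret=(--dhchap-ctrl-secret "$(< "${ckeys[keyid]}")")
    fi

    # target side: allow this host on the subsystem with the named keyring keys
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$hostid" --dhchap-key "$key" "${ckey[@]}"

    # host side: attach over TCP and run the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "$key" "${ckey[@]}"

    # confirm the controller exists and the qpair negotiated what was asked for
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    local qpairs
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0

    # repeat the handshake with the kernel initiator, feeding the same DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
        --dhchap-secret "$(< "${keys[keyid]}")" "${csecret[@]}"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$hostid"
}
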
19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.053 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.053 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.053 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.053 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.053 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.053 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.053 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.310 19:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.246 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.504 19:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.763 00:14:23.763 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.763 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.763 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.021 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.021 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.021 19:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.021 19:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.021 19:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.021 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.021 { 00:14:24.021 "cntlid": 13, 00:14:24.021 "qid": 0, 00:14:24.021 "state": "enabled", 00:14:24.021 "thread": "nvmf_tgt_poll_group_000", 00:14:24.021 "listen_address": { 00:14:24.021 "trtype": "TCP", 00:14:24.021 "adrfam": "IPv4", 00:14:24.021 "traddr": "10.0.0.2", 00:14:24.021 "trsvcid": "4420" 00:14:24.021 }, 00:14:24.021 "peer_address": { 00:14:24.021 "trtype": "TCP", 00:14:24.021 "adrfam": "IPv4", 00:14:24.021 "traddr": "10.0.0.1", 00:14:24.021 "trsvcid": "58944" 00:14:24.021 }, 00:14:24.021 "auth": { 00:14:24.021 "state": "completed", 00:14:24.021 "digest": "sha256", 00:14:24.021 "dhgroup": "ffdhe2048" 00:14:24.021 } 00:14:24.021 } 00:14:24.021 ]' 00:14:24.021 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.279 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.279 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.279 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.279 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.279 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.279 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.279 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.536 19:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.514 19:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:25.772 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.030 00:14:26.030 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.030 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.030 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.288 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.288 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.288 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.288 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.288 19:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.288 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.288 { 00:14:26.288 "cntlid": 15, 00:14:26.288 "qid": 0, 00:14:26.288 "state": "enabled", 00:14:26.289 "thread": "nvmf_tgt_poll_group_000", 00:14:26.289 "listen_address": { 00:14:26.289 "trtype": "TCP", 00:14:26.289 "adrfam": "IPv4", 00:14:26.289 "traddr": "10.0.0.2", 00:14:26.289 "trsvcid": "4420" 00:14:26.289 }, 00:14:26.289 "peer_address": { 00:14:26.289 "trtype": "TCP", 00:14:26.289 "adrfam": "IPv4", 00:14:26.289 "traddr": "10.0.0.1", 00:14:26.289 "trsvcid": "58966" 00:14:26.289 }, 00:14:26.289 "auth": { 00:14:26.289 "state": "completed", 00:14:26.289 "digest": "sha256", 00:14:26.289 "dhgroup": "ffdhe2048" 00:14:26.289 } 00:14:26.289 } 00:14:26.289 ]' 00:14:26.289 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.289 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.289 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.546 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.547 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.547 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.547 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.547 19:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.805 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.738 19:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.995 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.253 00:14:28.253 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.253 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.253 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.511 { 00:14:28.511 "cntlid": 17, 00:14:28.511 "qid": 0, 00:14:28.511 "state": "enabled", 00:14:28.511 "thread": "nvmf_tgt_poll_group_000", 00:14:28.511 "listen_address": { 00:14:28.511 "trtype": "TCP", 00:14:28.511 "adrfam": "IPv4", 00:14:28.511 "traddr": 
"10.0.0.2", 00:14:28.511 "trsvcid": "4420" 00:14:28.511 }, 00:14:28.511 "peer_address": { 00:14:28.511 "trtype": "TCP", 00:14:28.511 "adrfam": "IPv4", 00:14:28.511 "traddr": "10.0.0.1", 00:14:28.511 "trsvcid": "58992" 00:14:28.511 }, 00:14:28.511 "auth": { 00:14:28.511 "state": "completed", 00:14:28.511 "digest": "sha256", 00:14:28.511 "dhgroup": "ffdhe3072" 00:14:28.511 } 00:14:28.511 } 00:14:28.511 ]' 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.511 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.768 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.768 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.768 19:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.027 19:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.957 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.214 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.472 00:14:30.472 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.472 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.472 19:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.730 { 00:14:30.730 "cntlid": 19, 00:14:30.730 "qid": 0, 00:14:30.730 "state": "enabled", 00:14:30.730 "thread": "nvmf_tgt_poll_group_000", 00:14:30.730 "listen_address": { 00:14:30.730 "trtype": "TCP", 00:14:30.730 "adrfam": "IPv4", 00:14:30.730 "traddr": "10.0.0.2", 00:14:30.730 "trsvcid": "4420" 00:14:30.730 }, 00:14:30.730 "peer_address": { 00:14:30.730 "trtype": "TCP", 00:14:30.730 "adrfam": "IPv4", 00:14:30.730 "traddr": "10.0.0.1", 00:14:30.730 "trsvcid": "59026" 00:14:30.730 }, 00:14:30.730 "auth": { 00:14:30.730 "state": "completed", 00:14:30.730 "digest": "sha256", 00:14:30.730 "dhgroup": "ffdhe3072" 00:14:30.730 } 00:14:30.730 } 00:14:30.730 ]' 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.730 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.989 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.989 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.989 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.248 19:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:14:32.180 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.181 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:32.181 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.181 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.181 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.181 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.181 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.181 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.437 19:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.693 00:14:32.693 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.693 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.693 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.951 { 00:14:32.951 "cntlid": 21, 00:14:32.951 "qid": 0, 00:14:32.951 "state": "enabled", 00:14:32.951 "thread": "nvmf_tgt_poll_group_000", 00:14:32.951 "listen_address": { 00:14:32.951 "trtype": "TCP", 00:14:32.951 "adrfam": "IPv4", 00:14:32.951 "traddr": "10.0.0.2", 00:14:32.951 "trsvcid": "4420" 00:14:32.951 }, 00:14:32.951 "peer_address": { 00:14:32.951 "trtype": "TCP", 00:14:32.951 "adrfam": "IPv4", 00:14:32.951 "traddr": "10.0.0.1", 00:14:32.951 "trsvcid": "34912" 00:14:32.951 }, 00:14:32.951 "auth": { 00:14:32.951 "state": "completed", 00:14:32.951 "digest": "sha256", 00:14:32.951 "dhgroup": "ffdhe3072" 00:14:32.951 } 00:14:32.951 } 00:14:32.951 ]' 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.951 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.209 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.209 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.209 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.468 19:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
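[Annotation] Each round above ends with a kernel-initiator pass: after the SPDK host detaches, the same subsystem is connected with the Linux nvme CLI using the literal DHHC-1 secrets, then disconnected, and the host entry is removed before the next digest/dhgroup/key combination. A sketch of that leg with the flags used in this trace; the DHHC-1 strings below are placeholders, the full values appear in the log lines above:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55

  # Kernel initiator performs the same DH-CHAP handshake with explicit secrets.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'

  nvme disconnect -n "$subnqn"

  # Drop the host from the subsystem before the next round.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host "$subnqn" "nqn.2014-08.org.nvmexpress:uuid:${hostid}"
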
00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.408 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.666 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.667 19:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.924 00:14:34.924 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.924 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.924 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.193 { 00:14:35.193 "cntlid": 23, 00:14:35.193 "qid": 0, 00:14:35.193 "state": "enabled", 00:14:35.193 "thread": "nvmf_tgt_poll_group_000", 00:14:35.193 "listen_address": { 00:14:35.193 "trtype": "TCP", 00:14:35.193 "adrfam": "IPv4", 00:14:35.193 "traddr": "10.0.0.2", 00:14:35.193 "trsvcid": "4420" 00:14:35.193 }, 00:14:35.193 "peer_address": { 00:14:35.193 "trtype": "TCP", 00:14:35.193 "adrfam": "IPv4", 00:14:35.193 "traddr": "10.0.0.1", 00:14:35.193 "trsvcid": "34950" 00:14:35.193 }, 00:14:35.193 "auth": { 00:14:35.193 "state": "completed", 00:14:35.193 "digest": "sha256", 00:14:35.193 "dhgroup": "ffdhe3072" 00:14:35.193 } 00:14:35.193 } 00:14:35.193 ]' 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.193 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.456 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.456 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.456 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.456 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.456 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.712 19:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:14:36.646 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.647 19:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.904 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.161 00:14:37.161 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.161 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.161 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.419 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.419 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.419 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.419 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.419 19:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.419 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.419 { 00:14:37.419 "cntlid": 25, 00:14:37.419 "qid": 0, 00:14:37.419 "state": "enabled", 00:14:37.419 "thread": "nvmf_tgt_poll_group_000", 00:14:37.419 "listen_address": { 00:14:37.419 "trtype": "TCP", 00:14:37.419 "adrfam": "IPv4", 00:14:37.419 "traddr": "10.0.0.2", 00:14:37.419 "trsvcid": "4420" 00:14:37.419 }, 00:14:37.419 "peer_address": { 00:14:37.419 "trtype": "TCP", 00:14:37.419 "adrfam": "IPv4", 00:14:37.419 "traddr": "10.0.0.1", 00:14:37.419 "trsvcid": "34986" 00:14:37.419 }, 00:14:37.419 "auth": { 00:14:37.419 "state": "completed", 00:14:37.419 "digest": "sha256", 00:14:37.419 "dhgroup": "ffdhe4096" 00:14:37.419 } 00:14:37.419 } 00:14:37.419 ]' 00:14:37.419 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.676 19:09:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.676 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.676 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.676 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.676 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.676 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.676 19:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.934 19:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.911 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.169 19:09:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.169 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.427 00:14:39.427 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.427 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.427 19:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.685 { 00:14:39.685 "cntlid": 27, 00:14:39.685 "qid": 0, 00:14:39.685 "state": "enabled", 00:14:39.685 "thread": "nvmf_tgt_poll_group_000", 00:14:39.685 "listen_address": { 00:14:39.685 "trtype": "TCP", 00:14:39.685 "adrfam": "IPv4", 00:14:39.685 "traddr": "10.0.0.2", 00:14:39.685 "trsvcid": "4420" 00:14:39.685 }, 00:14:39.685 "peer_address": { 00:14:39.685 "trtype": "TCP", 00:14:39.685 "adrfam": "IPv4", 00:14:39.685 "traddr": "10.0.0.1", 00:14:39.685 "trsvcid": "35024" 00:14:39.685 }, 00:14:39.685 "auth": { 00:14:39.685 "state": "completed", 00:14:39.685 "digest": "sha256", 00:14:39.685 "dhgroup": "ffdhe4096" 00:14:39.685 } 00:14:39.685 } 00:14:39.685 ]' 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.685 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.943 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.943 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.943 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.943 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.943 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.201 19:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.137 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.395 19:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.652 00:14:41.652 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.652 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.652 19:09:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.910 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.910 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.910 19:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.910 19:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.910 19:09:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.910 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.910 { 00:14:41.910 "cntlid": 29, 00:14:41.910 "qid": 0, 00:14:41.910 "state": "enabled", 00:14:41.910 "thread": "nvmf_tgt_poll_group_000", 00:14:41.910 "listen_address": { 00:14:41.910 "trtype": "TCP", 00:14:41.910 "adrfam": "IPv4", 00:14:41.910 "traddr": "10.0.0.2", 00:14:41.910 "trsvcid": "4420" 00:14:41.910 }, 00:14:41.910 "peer_address": { 00:14:41.910 "trtype": "TCP", 00:14:41.910 "adrfam": "IPv4", 00:14:41.910 "traddr": "10.0.0.1", 00:14:41.910 "trsvcid": "38808" 00:14:41.910 }, 00:14:41.910 "auth": { 00:14:41.910 "state": "completed", 00:14:41.910 "digest": "sha256", 00:14:41.910 "dhgroup": "ffdhe4096" 00:14:41.910 } 00:14:41.910 } 00:14:41.910 ]' 00:14:41.910 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.168 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.168 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.168 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.168 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.168 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.168 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.168 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.425 19:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:14:43.360 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.360 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.360 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.360 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.360 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.360 19:09:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.360 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.360 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.618 19:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:44.183 00:14:44.183 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.183 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.183 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.183 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.184 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.184 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.184 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.184 19:09:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.184 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.184 { 00:14:44.184 "cntlid": 31, 00:14:44.184 "qid": 0, 00:14:44.184 "state": "enabled", 00:14:44.184 "thread": "nvmf_tgt_poll_group_000", 00:14:44.184 "listen_address": { 00:14:44.184 "trtype": "TCP", 00:14:44.184 "adrfam": "IPv4", 00:14:44.184 "traddr": "10.0.0.2", 00:14:44.184 "trsvcid": "4420" 00:14:44.184 }, 
00:14:44.184 "peer_address": { 00:14:44.184 "trtype": "TCP", 00:14:44.184 "adrfam": "IPv4", 00:14:44.184 "traddr": "10.0.0.1", 00:14:44.184 "trsvcid": "38826" 00:14:44.184 }, 00:14:44.184 "auth": { 00:14:44.184 "state": "completed", 00:14:44.184 "digest": "sha256", 00:14:44.184 "dhgroup": "ffdhe4096" 00:14:44.184 } 00:14:44.184 } 00:14:44.184 ]' 00:14:44.184 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.442 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.442 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.442 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.442 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.442 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.442 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.442 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.699 19:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.634 19:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.892 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.459 00:14:46.459 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.459 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.459 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.788 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.788 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.788 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.788 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.788 19:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.788 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.788 { 00:14:46.788 "cntlid": 33, 00:14:46.788 "qid": 0, 00:14:46.788 "state": "enabled", 00:14:46.788 "thread": "nvmf_tgt_poll_group_000", 00:14:46.788 "listen_address": { 00:14:46.788 "trtype": "TCP", 00:14:46.788 "adrfam": "IPv4", 00:14:46.788 "traddr": "10.0.0.2", 00:14:46.788 "trsvcid": "4420" 00:14:46.788 }, 00:14:46.788 "peer_address": { 00:14:46.788 "trtype": "TCP", 00:14:46.788 "adrfam": "IPv4", 00:14:46.788 "traddr": "10.0.0.1", 00:14:46.788 "trsvcid": "38836" 00:14:46.788 }, 00:14:46.788 "auth": { 00:14:46.788 "state": "completed", 00:14:46.788 "digest": "sha256", 00:14:46.788 "dhgroup": "ffdhe6144" 00:14:46.788 } 00:14:46.788 } 00:14:46.788 ]' 00:14:46.788 19:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.788 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.788 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.788 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.788 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.788 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.788 19:09:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.788 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.046 19:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:47.984 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.242 19:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.810 00:14:48.810 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.810 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.810 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.068 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.068 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.068 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.068 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.068 19:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.068 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.068 { 00:14:49.068 "cntlid": 35, 00:14:49.068 "qid": 0, 00:14:49.068 "state": "enabled", 00:14:49.068 "thread": "nvmf_tgt_poll_group_000", 00:14:49.068 "listen_address": { 00:14:49.068 "trtype": "TCP", 00:14:49.068 "adrfam": "IPv4", 00:14:49.068 "traddr": "10.0.0.2", 00:14:49.068 "trsvcid": "4420" 00:14:49.068 }, 00:14:49.068 "peer_address": { 00:14:49.068 "trtype": "TCP", 00:14:49.068 "adrfam": "IPv4", 00:14:49.068 "traddr": "10.0.0.1", 00:14:49.068 "trsvcid": "38866" 00:14:49.068 }, 00:14:49.068 "auth": { 00:14:49.068 "state": "completed", 00:14:49.068 "digest": "sha256", 00:14:49.068 "dhgroup": "ffdhe6144" 00:14:49.068 } 00:14:49.068 } 00:14:49.068 ]' 00:14:49.068 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.326 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.326 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.326 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.326 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.326 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.326 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.326 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.585 19:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.522 19:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.781 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.348 00:14:51.348 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.348 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.348 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.607 { 00:14:51.607 "cntlid": 37, 00:14:51.607 "qid": 0, 00:14:51.607 "state": "enabled", 00:14:51.607 "thread": "nvmf_tgt_poll_group_000", 00:14:51.607 "listen_address": { 00:14:51.607 "trtype": "TCP", 00:14:51.607 "adrfam": "IPv4", 00:14:51.607 "traddr": "10.0.0.2", 00:14:51.607 "trsvcid": "4420" 00:14:51.607 }, 00:14:51.607 "peer_address": { 00:14:51.607 "trtype": "TCP", 00:14:51.607 "adrfam": "IPv4", 00:14:51.607 "traddr": "10.0.0.1", 00:14:51.607 "trsvcid": "47734" 00:14:51.607 }, 00:14:51.607 "auth": { 00:14:51.607 "state": "completed", 00:14:51.607 "digest": "sha256", 00:14:51.607 "dhgroup": "ffdhe6144" 00:14:51.607 } 00:14:51.607 } 00:14:51.607 ]' 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.607 19:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.607 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.607 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.865 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.865 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.865 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.865 19:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.240 19:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.808 00:14:53.808 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.808 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.808 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.066 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.066 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.067 { 00:14:54.067 "cntlid": 39, 00:14:54.067 "qid": 0, 00:14:54.067 "state": "enabled", 00:14:54.067 "thread": "nvmf_tgt_poll_group_000", 00:14:54.067 "listen_address": { 00:14:54.067 "trtype": "TCP", 00:14:54.067 "adrfam": "IPv4", 00:14:54.067 "traddr": "10.0.0.2", 00:14:54.067 "trsvcid": "4420" 00:14:54.067 }, 00:14:54.067 "peer_address": { 00:14:54.067 "trtype": "TCP", 00:14:54.067 "adrfam": "IPv4", 00:14:54.067 "traddr": "10.0.0.1", 00:14:54.067 "trsvcid": "47752" 00:14:54.067 }, 00:14:54.067 "auth": { 00:14:54.067 "state": "completed", 00:14:54.067 "digest": "sha256", 00:14:54.067 "dhgroup": "ffdhe6144" 00:14:54.067 } 00:14:54.067 } 00:14:54.067 ]' 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.067 19:09:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.067 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.325 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.325 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.325 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.584 19:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.521 19:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.779 19:09:36 
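The JSON blocks scattered through this stretch are nvmf_subsystem_get_qpairs output, and the jq one-liners that follow them are the actual pass/fail checks for each iteration. Condensed into a standalone form, where only the [[ ... ]] comparisons and the variable names are illustrative; the RPC calls, jq filters and expected values are exactly the ones shown in the trace:

  # the host-side controller must have come up under the expected name
  name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # the target must report a qpair that negotiated the expected auth parameters
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The trace treats "completed" in .auth.state as the signal that DH-HMAC-CHAP finished on the qpair; the digest and dhgroup fields are then compared against whatever was forced through bdev_nvme_set_options for that pass.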
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.779 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.716 00:14:56.716 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.716 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.716 19:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.974 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.974 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.974 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.974 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.974 19:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.974 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.974 { 00:14:56.974 "cntlid": 41, 00:14:56.974 "qid": 0, 00:14:56.974 "state": "enabled", 00:14:56.974 "thread": "nvmf_tgt_poll_group_000", 00:14:56.974 "listen_address": { 00:14:56.975 "trtype": "TCP", 00:14:56.975 "adrfam": "IPv4", 00:14:56.975 "traddr": "10.0.0.2", 00:14:56.975 "trsvcid": "4420" 00:14:56.975 }, 00:14:56.975 "peer_address": { 00:14:56.975 "trtype": "TCP", 00:14:56.975 "adrfam": "IPv4", 00:14:56.975 "traddr": "10.0.0.1", 00:14:56.975 "trsvcid": "47786" 00:14:56.975 }, 00:14:56.975 "auth": { 00:14:56.975 "state": "completed", 00:14:56.975 "digest": "sha256", 00:14:56.975 "dhgroup": "ffdhe8192" 00:14:56.975 } 00:14:56.975 } 00:14:56.975 ]' 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.975 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.233 19:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.170 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.738 19:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.674 00:14:59.674 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.674 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.674 19:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.674 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.674 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.674 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.674 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.674 19:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.674 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.674 { 00:14:59.674 "cntlid": 43, 00:14:59.674 "qid": 0, 00:14:59.674 "state": "enabled", 00:14:59.674 "thread": "nvmf_tgt_poll_group_000", 00:14:59.674 "listen_address": { 00:14:59.674 "trtype": "TCP", 00:14:59.674 "adrfam": "IPv4", 00:14:59.674 "traddr": "10.0.0.2", 00:14:59.674 "trsvcid": "4420" 00:14:59.674 }, 00:14:59.674 "peer_address": { 00:14:59.674 "trtype": "TCP", 00:14:59.674 "adrfam": "IPv4", 00:14:59.674 "traddr": "10.0.0.1", 00:14:59.674 "trsvcid": "47818" 00:14:59.674 }, 00:14:59.674 "auth": { 00:14:59.674 "state": "completed", 00:14:59.674 "digest": "sha256", 00:14:59.674 "dhgroup": "ffdhe8192" 00:14:59.674 } 00:14:59.674 } 00:14:59.674 ]' 00:14:59.674 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.932 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.932 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.932 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.932 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.932 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.932 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.932 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.190 19:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.125 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.384 19:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.321 00:15:02.321 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.321 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.321 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.579 { 00:15:02.579 "cntlid": 45, 00:15:02.579 "qid": 0, 00:15:02.579 "state": "enabled", 00:15:02.579 "thread": "nvmf_tgt_poll_group_000", 00:15:02.579 "listen_address": { 00:15:02.579 "trtype": "TCP", 00:15:02.579 "adrfam": "IPv4", 00:15:02.579 "traddr": "10.0.0.2", 00:15:02.579 "trsvcid": "4420" 
00:15:02.579 }, 00:15:02.579 "peer_address": { 00:15:02.579 "trtype": "TCP", 00:15:02.579 "adrfam": "IPv4", 00:15:02.579 "traddr": "10.0.0.1", 00:15:02.579 "trsvcid": "41714" 00:15:02.579 }, 00:15:02.579 "auth": { 00:15:02.579 "state": "completed", 00:15:02.579 "digest": "sha256", 00:15:02.579 "dhgroup": "ffdhe8192" 00:15:02.579 } 00:15:02.579 } 00:15:02.579 ]' 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.579 19:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.837 19:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:15:03.774 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.032 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.032 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.032 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.032 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.032 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.032 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.032 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.291 19:09:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:04.291 19:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.230 00:15:05.230 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.230 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.230 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.487 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.487 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.487 19:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.487 19:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.487 19:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.487 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.487 { 00:15:05.487 "cntlid": 47, 00:15:05.487 "qid": 0, 00:15:05.487 "state": "enabled", 00:15:05.487 "thread": "nvmf_tgt_poll_group_000", 00:15:05.487 "listen_address": { 00:15:05.487 "trtype": "TCP", 00:15:05.487 "adrfam": "IPv4", 00:15:05.487 "traddr": "10.0.0.2", 00:15:05.487 "trsvcid": "4420" 00:15:05.487 }, 00:15:05.487 "peer_address": { 00:15:05.487 "trtype": "TCP", 00:15:05.487 "adrfam": "IPv4", 00:15:05.487 "traddr": "10.0.0.1", 00:15:05.487 "trsvcid": "41744" 00:15:05.487 }, 00:15:05.487 "auth": { 00:15:05.487 "state": "completed", 00:15:05.487 "digest": "sha256", 00:15:05.487 "dhgroup": "ffdhe8192" 00:15:05.487 } 00:15:05.487 } 00:15:05.487 ]' 00:15:05.488 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.488 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.488 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.488 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.488 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.488 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.488 19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.488 
19:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.747 19:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:15:06.689 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.949 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.224 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.224 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.224 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.487 00:15:07.487 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.487 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.487 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.743 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.744 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.744 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.744 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.744 19:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.744 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.744 { 00:15:07.744 "cntlid": 49, 00:15:07.744 "qid": 0, 00:15:07.744 "state": "enabled", 00:15:07.744 "thread": "nvmf_tgt_poll_group_000", 00:15:07.744 "listen_address": { 00:15:07.744 "trtype": "TCP", 00:15:07.744 "adrfam": "IPv4", 00:15:07.744 "traddr": "10.0.0.2", 00:15:07.744 "trsvcid": "4420" 00:15:07.744 }, 00:15:07.744 "peer_address": { 00:15:07.744 "trtype": "TCP", 00:15:07.744 "adrfam": "IPv4", 00:15:07.744 "traddr": "10.0.0.1", 00:15:07.744 "trsvcid": "41774" 00:15:07.744 }, 00:15:07.744 "auth": { 00:15:07.744 "state": "completed", 00:15:07.744 "digest": "sha384", 00:15:07.744 "dhgroup": "null" 00:15:07.744 } 00:15:07.744 } 00:15:07.744 ]' 00:15:07.744 19:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.744 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.744 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.744 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:07.744 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.744 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.744 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.744 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.002 19:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.935 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.502 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.761 00:15:09.761 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.761 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.761 19:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.017 { 00:15:10.017 "cntlid": 51, 00:15:10.017 "qid": 0, 00:15:10.017 "state": "enabled", 00:15:10.017 "thread": "nvmf_tgt_poll_group_000", 00:15:10.017 "listen_address": { 00:15:10.017 "trtype": "TCP", 00:15:10.017 "adrfam": "IPv4", 00:15:10.017 "traddr": "10.0.0.2", 00:15:10.017 "trsvcid": "4420" 00:15:10.017 }, 00:15:10.017 "peer_address": { 00:15:10.017 "trtype": "TCP", 00:15:10.017 "adrfam": "IPv4", 00:15:10.017 "traddr": "10.0.0.1", 00:15:10.017 "trsvcid": "41794" 00:15:10.017 }, 00:15:10.017 "auth": { 00:15:10.017 "state": "completed", 00:15:10.017 "digest": "sha384", 00:15:10.017 "dhgroup": "null" 00:15:10.017 } 00:15:10.017 } 00:15:10.017 ]' 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.017 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.274 19:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.205 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:11.462 19:09:51 
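By this point the trace has moved from the ffdhe* groups under sha256 to the sha384/null combinations, which makes the overall shape of the test easier to see: three nested loops over digests, DH groups and key indexes, with bdev_nvme_set_options re-run at the top of every innermost pass. In outline, using the loop markers visible in the trace (auth.sh@91 through @96); the values noted in the comments are only those observed in this stretch of the log, not necessarily the complete lists:

  for digest in "${digests[@]}"; do         # auth.sh@91 - sha256, sha384, ... seen so far
    for dhgroup in "${dhgroups[@]}"; do     # auth.sh@92 - null, ffdhe4096, ffdhe6144, ffdhe8192, ...
      for keyid in "${!keys[@]}"; do        # auth.sh@93 - 0 1 2 3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"                # auth.sh@94
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # auth.sh@96
      done
    done
  done

Each combination therefore runs the same add_host / attach / verify / nvme-cli reconnect / cleanup cycle sketched earlier, with the controller key supplied only for the key indexes that have a matching ckey.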
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.462 19:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.026 00:15:12.026 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.026 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.026 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.284 { 00:15:12.284 "cntlid": 53, 00:15:12.284 "qid": 0, 00:15:12.284 "state": "enabled", 00:15:12.284 "thread": "nvmf_tgt_poll_group_000", 00:15:12.284 "listen_address": { 00:15:12.284 "trtype": "TCP", 00:15:12.284 "adrfam": "IPv4", 00:15:12.284 "traddr": "10.0.0.2", 00:15:12.284 "trsvcid": "4420" 00:15:12.284 }, 00:15:12.284 "peer_address": { 00:15:12.284 "trtype": "TCP", 00:15:12.284 "adrfam": "IPv4", 00:15:12.284 "traddr": "10.0.0.1", 00:15:12.284 "trsvcid": "45696" 00:15:12.284 }, 00:15:12.284 "auth": { 00:15:12.284 "state": "completed", 00:15:12.284 "digest": "sha384", 00:15:12.284 "dhgroup": "null" 00:15:12.284 } 00:15:12.284 } 00:15:12.284 ]' 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.284 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.541 19:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.910 19:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.910 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.167 00:15:14.424 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.424 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.424 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.682 { 00:15:14.682 "cntlid": 55, 00:15:14.682 "qid": 0, 00:15:14.682 "state": "enabled", 00:15:14.682 "thread": "nvmf_tgt_poll_group_000", 00:15:14.682 "listen_address": { 00:15:14.682 "trtype": "TCP", 00:15:14.682 "adrfam": "IPv4", 00:15:14.682 "traddr": "10.0.0.2", 00:15:14.682 "trsvcid": "4420" 00:15:14.682 }, 00:15:14.682 "peer_address": { 00:15:14.682 "trtype": "TCP", 00:15:14.682 "adrfam": "IPv4", 00:15:14.682 "traddr": "10.0.0.1", 00:15:14.682 "trsvcid": "45740" 00:15:14.682 }, 00:15:14.682 "auth": { 00:15:14.682 "state": "completed", 00:15:14.682 "digest": "sha384", 00:15:14.682 "dhgroup": "null" 00:15:14.682 } 00:15:14.682 } 00:15:14.682 ]' 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.682 19:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.939 19:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:15:15.872 19:09:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.872 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.129 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.387 00:15:16.387 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.387 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.387 19:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.644 { 00:15:16.644 "cntlid": 57, 00:15:16.644 "qid": 0, 00:15:16.644 "state": "enabled", 00:15:16.644 "thread": "nvmf_tgt_poll_group_000", 00:15:16.644 "listen_address": { 00:15:16.644 "trtype": "TCP", 00:15:16.644 "adrfam": "IPv4", 00:15:16.644 "traddr": "10.0.0.2", 00:15:16.644 "trsvcid": "4420" 00:15:16.644 }, 00:15:16.644 "peer_address": { 00:15:16.644 "trtype": "TCP", 00:15:16.644 "adrfam": "IPv4", 00:15:16.644 "traddr": "10.0.0.1", 00:15:16.644 "trsvcid": "45774" 00:15:16.644 }, 00:15:16.644 "auth": { 00:15:16.644 "state": "completed", 00:15:16.644 "digest": "sha384", 00:15:16.644 "dhgroup": "ffdhe2048" 00:15:16.644 } 00:15:16.644 } 00:15:16.644 ]' 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.644 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.901 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.901 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.901 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.901 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.901 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.159 19:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.091 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.349 19:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.914 00:15:18.914 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.914 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.914 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.171 { 00:15:19.171 "cntlid": 59, 00:15:19.171 "qid": 0, 00:15:19.171 "state": "enabled", 00:15:19.171 "thread": "nvmf_tgt_poll_group_000", 00:15:19.171 "listen_address": { 00:15:19.171 "trtype": "TCP", 00:15:19.171 "adrfam": "IPv4", 00:15:19.171 "traddr": "10.0.0.2", 00:15:19.171 "trsvcid": "4420" 00:15:19.171 }, 00:15:19.171 "peer_address": { 00:15:19.171 "trtype": "TCP", 00:15:19.171 "adrfam": "IPv4", 00:15:19.171 
"traddr": "10.0.0.1", 00:15:19.171 "trsvcid": "45796" 00:15:19.171 }, 00:15:19.171 "auth": { 00:15:19.171 "state": "completed", 00:15:19.171 "digest": "sha384", 00:15:19.171 "dhgroup": "ffdhe2048" 00:15:19.171 } 00:15:19.171 } 00:15:19.171 ]' 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.171 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.429 19:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.400 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.658 19:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.916 00:15:20.916 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.916 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.916 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.174 { 00:15:21.174 "cntlid": 61, 00:15:21.174 "qid": 0, 00:15:21.174 "state": "enabled", 00:15:21.174 "thread": "nvmf_tgt_poll_group_000", 00:15:21.174 "listen_address": { 00:15:21.174 "trtype": "TCP", 00:15:21.174 "adrfam": "IPv4", 00:15:21.174 "traddr": "10.0.0.2", 00:15:21.174 "trsvcid": "4420" 00:15:21.174 }, 00:15:21.174 "peer_address": { 00:15:21.174 "trtype": "TCP", 00:15:21.174 "adrfam": "IPv4", 00:15:21.174 "traddr": "10.0.0.1", 00:15:21.174 "trsvcid": "59306" 00:15:21.174 }, 00:15:21.174 "auth": { 00:15:21.174 "state": "completed", 00:15:21.174 "digest": "sha384", 00:15:21.174 "dhgroup": "ffdhe2048" 00:15:21.174 } 00:15:21.174 } 00:15:21.174 ]' 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.174 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.431 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.431 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.431 19:10:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.689 19:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.623 19:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.881 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.137 00:15:23.137 19:10:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.137 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.137 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.394 { 00:15:23.394 "cntlid": 63, 00:15:23.394 "qid": 0, 00:15:23.394 "state": "enabled", 00:15:23.394 "thread": "nvmf_tgt_poll_group_000", 00:15:23.394 "listen_address": { 00:15:23.394 "trtype": "TCP", 00:15:23.394 "adrfam": "IPv4", 00:15:23.394 "traddr": "10.0.0.2", 00:15:23.394 "trsvcid": "4420" 00:15:23.394 }, 00:15:23.394 "peer_address": { 00:15:23.394 "trtype": "TCP", 00:15:23.394 "adrfam": "IPv4", 00:15:23.394 "traddr": "10.0.0.1", 00:15:23.394 "trsvcid": "59322" 00:15:23.394 }, 00:15:23.394 "auth": { 00:15:23.394 "state": "completed", 00:15:23.394 "digest": "sha384", 00:15:23.394 "dhgroup": "ffdhe2048" 00:15:23.394 } 00:15:23.394 } 00:15:23.394 ]' 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.394 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.651 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.651 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.651 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.651 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.651 19:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.908 19:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.840 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.097 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.354 00:15:25.354 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.354 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.354 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.612 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.612 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.612 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.612 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.612 19:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.612 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.612 { 
00:15:25.612 "cntlid": 65, 00:15:25.612 "qid": 0, 00:15:25.612 "state": "enabled", 00:15:25.612 "thread": "nvmf_tgt_poll_group_000", 00:15:25.612 "listen_address": { 00:15:25.612 "trtype": "TCP", 00:15:25.612 "adrfam": "IPv4", 00:15:25.612 "traddr": "10.0.0.2", 00:15:25.612 "trsvcid": "4420" 00:15:25.612 }, 00:15:25.612 "peer_address": { 00:15:25.612 "trtype": "TCP", 00:15:25.612 "adrfam": "IPv4", 00:15:25.612 "traddr": "10.0.0.1", 00:15:25.612 "trsvcid": "59354" 00:15:25.612 }, 00:15:25.612 "auth": { 00:15:25.612 "state": "completed", 00:15:25.612 "digest": "sha384", 00:15:25.612 "dhgroup": "ffdhe3072" 00:15:25.612 } 00:15:25.612 } 00:15:25.612 ]' 00:15:25.612 19:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.612 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.612 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.869 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.869 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.869 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.869 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.869 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.127 19:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.061 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.319 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.577 00:15:27.577 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.577 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.577 19:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.835 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.835 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.835 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.835 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.835 19:10:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.835 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.835 { 00:15:27.835 "cntlid": 67, 00:15:27.835 "qid": 0, 00:15:27.835 "state": "enabled", 00:15:27.835 "thread": "nvmf_tgt_poll_group_000", 00:15:27.835 "listen_address": { 00:15:27.835 "trtype": "TCP", 00:15:27.835 "adrfam": "IPv4", 00:15:27.835 "traddr": "10.0.0.2", 00:15:27.835 "trsvcid": "4420" 00:15:27.835 }, 00:15:27.835 "peer_address": { 00:15:27.835 "trtype": "TCP", 00:15:27.835 "adrfam": "IPv4", 00:15:27.835 "traddr": "10.0.0.1", 00:15:27.835 "trsvcid": "59382" 00:15:27.835 }, 00:15:27.835 "auth": { 00:15:27.835 "state": "completed", 00:15:27.835 "digest": "sha384", 00:15:27.835 "dhgroup": "ffdhe3072" 00:15:27.835 } 00:15:27.835 } 00:15:27.835 ]' 00:15:27.835 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.093 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.093 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.093 19:10:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:28.093 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.093 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.093 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.093 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.351 19:10:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.284 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.542 19:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.800 00:15:29.800 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.800 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.800 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.058 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.058 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.058 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.058 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.316 { 00:15:30.316 "cntlid": 69, 00:15:30.316 "qid": 0, 00:15:30.316 "state": "enabled", 00:15:30.316 "thread": "nvmf_tgt_poll_group_000", 00:15:30.316 "listen_address": { 00:15:30.316 "trtype": "TCP", 00:15:30.316 "adrfam": "IPv4", 00:15:30.316 "traddr": "10.0.0.2", 00:15:30.316 "trsvcid": "4420" 00:15:30.316 }, 00:15:30.316 "peer_address": { 00:15:30.316 "trtype": "TCP", 00:15:30.316 "adrfam": "IPv4", 00:15:30.316 "traddr": "10.0.0.1", 00:15:30.316 "trsvcid": "59406" 00:15:30.316 }, 00:15:30.316 "auth": { 00:15:30.316 "state": "completed", 00:15:30.316 "digest": "sha384", 00:15:30.316 "dhgroup": "ffdhe3072" 00:15:30.316 } 00:15:30.316 } 00:15:30.316 ]' 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.316 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.574 19:10:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret 
DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.507 19:10:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.765 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.331 00:15:32.331 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.331 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.331 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.331 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.331 19:10:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.331 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.331 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.589 { 00:15:32.589 "cntlid": 71, 00:15:32.589 "qid": 0, 00:15:32.589 "state": "enabled", 00:15:32.589 "thread": "nvmf_tgt_poll_group_000", 00:15:32.589 "listen_address": { 00:15:32.589 "trtype": "TCP", 00:15:32.589 "adrfam": "IPv4", 00:15:32.589 "traddr": "10.0.0.2", 00:15:32.589 "trsvcid": "4420" 00:15:32.589 }, 00:15:32.589 "peer_address": { 00:15:32.589 "trtype": "TCP", 00:15:32.589 "adrfam": "IPv4", 00:15:32.589 "traddr": "10.0.0.1", 00:15:32.589 "trsvcid": "38390" 00:15:32.589 }, 00:15:32.589 "auth": { 00:15:32.589 "state": "completed", 00:15:32.589 "digest": "sha384", 00:15:32.589 "dhgroup": "ffdhe3072" 00:15:32.589 } 00:15:32.589 } 00:15:32.589 ]' 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.589 19:10:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.847 19:10:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.776 19:10:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.034 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.598 00:15:34.598 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.598 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.598 19:10:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.856 { 00:15:34.856 "cntlid": 73, 00:15:34.856 "qid": 0, 00:15:34.856 "state": "enabled", 00:15:34.856 "thread": "nvmf_tgt_poll_group_000", 00:15:34.856 "listen_address": { 00:15:34.856 "trtype": "TCP", 00:15:34.856 "adrfam": "IPv4", 00:15:34.856 "traddr": "10.0.0.2", 00:15:34.856 "trsvcid": "4420" 00:15:34.856 }, 00:15:34.856 "peer_address": { 00:15:34.856 "trtype": "TCP", 00:15:34.856 "adrfam": "IPv4", 00:15:34.856 "traddr": "10.0.0.1", 00:15:34.856 "trsvcid": "38412" 00:15:34.856 }, 00:15:34.856 "auth": { 00:15:34.856 
"state": "completed", 00:15:34.856 "digest": "sha384", 00:15:34.856 "dhgroup": "ffdhe4096" 00:15:34.856 } 00:15:34.856 } 00:15:34.856 ]' 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.856 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.114 19:10:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.052 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.367 19:10:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.625 00:15:36.625 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.625 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.625 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.883 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.883 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.883 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.883 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.883 19:10:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.883 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.883 { 00:15:36.883 "cntlid": 75, 00:15:36.883 "qid": 0, 00:15:36.883 "state": "enabled", 00:15:36.883 "thread": "nvmf_tgt_poll_group_000", 00:15:36.883 "listen_address": { 00:15:36.883 "trtype": "TCP", 00:15:36.883 "adrfam": "IPv4", 00:15:36.883 "traddr": "10.0.0.2", 00:15:36.883 "trsvcid": "4420" 00:15:36.883 }, 00:15:36.883 "peer_address": { 00:15:36.883 "trtype": "TCP", 00:15:36.883 "adrfam": "IPv4", 00:15:36.883 "traddr": "10.0.0.1", 00:15:36.883 "trsvcid": "38428" 00:15:36.883 }, 00:15:36.883 "auth": { 00:15:36.883 "state": "completed", 00:15:36.883 "digest": "sha384", 00:15:36.883 "dhgroup": "ffdhe4096" 00:15:36.883 } 00:15:36.883 } 00:15:36.883 ]' 00:15:36.883 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.140 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.140 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.140 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:37.140 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.140 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.140 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.140 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.398 19:10:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.331 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.588 19:10:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:39.152 00:15:39.152 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.152 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.152 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.410 { 00:15:39.410 "cntlid": 77, 00:15:39.410 "qid": 0, 00:15:39.410 "state": "enabled", 00:15:39.410 "thread": "nvmf_tgt_poll_group_000", 00:15:39.410 "listen_address": { 00:15:39.410 "trtype": "TCP", 00:15:39.410 "adrfam": "IPv4", 00:15:39.410 "traddr": "10.0.0.2", 00:15:39.410 "trsvcid": "4420" 00:15:39.410 }, 00:15:39.410 "peer_address": { 00:15:39.410 "trtype": "TCP", 00:15:39.410 "adrfam": "IPv4", 00:15:39.410 "traddr": "10.0.0.1", 00:15:39.410 "trsvcid": "38450" 00:15:39.410 }, 00:15:39.410 "auth": { 00:15:39.410 "state": "completed", 00:15:39.410 "digest": "sha384", 00:15:39.410 "dhgroup": "ffdhe4096" 00:15:39.410 } 00:15:39.410 } 00:15:39.410 ]' 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.410 19:10:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.668 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:15:40.624 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.624 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.624 19:10:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.624 19:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.624 19:10:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.624 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.624 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.625 19:10:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.882 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.883 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.883 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.883 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.449 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.449 { 00:15:41.449 "cntlid": 79, 00:15:41.449 "qid": 
0, 00:15:41.449 "state": "enabled", 00:15:41.449 "thread": "nvmf_tgt_poll_group_000", 00:15:41.449 "listen_address": { 00:15:41.449 "trtype": "TCP", 00:15:41.449 "adrfam": "IPv4", 00:15:41.449 "traddr": "10.0.0.2", 00:15:41.449 "trsvcid": "4420" 00:15:41.449 }, 00:15:41.449 "peer_address": { 00:15:41.449 "trtype": "TCP", 00:15:41.449 "adrfam": "IPv4", 00:15:41.449 "traddr": "10.0.0.1", 00:15:41.449 "trsvcid": "42918" 00:15:41.449 }, 00:15:41.449 "auth": { 00:15:41.449 "state": "completed", 00:15:41.449 "digest": "sha384", 00:15:41.449 "dhgroup": "ffdhe4096" 00:15:41.449 } 00:15:41.449 } 00:15:41.449 ]' 00:15:41.449 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.706 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.706 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.707 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.707 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.707 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.707 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.707 19:10:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.964 19:10:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.897 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.156 19:10:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.156 19:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.721 00:15:43.721 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.721 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.721 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.979 { 00:15:43.979 "cntlid": 81, 00:15:43.979 "qid": 0, 00:15:43.979 "state": "enabled", 00:15:43.979 "thread": "nvmf_tgt_poll_group_000", 00:15:43.979 "listen_address": { 00:15:43.979 "trtype": "TCP", 00:15:43.979 "adrfam": "IPv4", 00:15:43.979 "traddr": "10.0.0.2", 00:15:43.979 "trsvcid": "4420" 00:15:43.979 }, 00:15:43.979 "peer_address": { 00:15:43.979 "trtype": "TCP", 00:15:43.979 "adrfam": "IPv4", 00:15:43.979 "traddr": "10.0.0.1", 00:15:43.979 "trsvcid": "42948" 00:15:43.979 }, 00:15:43.979 "auth": { 00:15:43.979 "state": "completed", 00:15:43.979 "digest": "sha384", 00:15:43.979 "dhgroup": "ffdhe6144" 00:15:43.979 } 00:15:43.979 } 00:15:43.979 ]' 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.979 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.237 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.237 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.237 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.237 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.237 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.495 19:10:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.428 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.686 19:10:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.250 00:15:46.250 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.250 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.250 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.506 { 00:15:46.506 "cntlid": 83, 00:15:46.506 "qid": 0, 00:15:46.506 "state": "enabled", 00:15:46.506 "thread": "nvmf_tgt_poll_group_000", 00:15:46.506 "listen_address": { 00:15:46.506 "trtype": "TCP", 00:15:46.506 "adrfam": "IPv4", 00:15:46.506 "traddr": "10.0.0.2", 00:15:46.506 "trsvcid": "4420" 00:15:46.506 }, 00:15:46.506 "peer_address": { 00:15:46.506 "trtype": "TCP", 00:15:46.506 "adrfam": "IPv4", 00:15:46.506 "traddr": "10.0.0.1", 00:15:46.506 "trsvcid": "42978" 00:15:46.506 }, 00:15:46.506 "auth": { 00:15:46.506 "state": "completed", 00:15:46.506 "digest": "sha384", 00:15:46.506 "dhgroup": "ffdhe6144" 00:15:46.506 } 00:15:46.506 } 00:15:46.506 ]' 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.506 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:46.507 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.507 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.507 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.507 19:10:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.763 19:10:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret 
DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.694 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.258 19:10:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.823 00:15:48.823 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.823 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.823 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.081 { 00:15:49.081 "cntlid": 85, 00:15:49.081 "qid": 0, 00:15:49.081 "state": "enabled", 00:15:49.081 "thread": "nvmf_tgt_poll_group_000", 00:15:49.081 "listen_address": { 00:15:49.081 "trtype": "TCP", 00:15:49.081 "adrfam": "IPv4", 00:15:49.081 "traddr": "10.0.0.2", 00:15:49.081 "trsvcid": "4420" 00:15:49.081 }, 00:15:49.081 "peer_address": { 00:15:49.081 "trtype": "TCP", 00:15:49.081 "adrfam": "IPv4", 00:15:49.081 "traddr": "10.0.0.1", 00:15:49.081 "trsvcid": "43014" 00:15:49.081 }, 00:15:49.081 "auth": { 00:15:49.081 "state": "completed", 00:15:49.081 "digest": "sha384", 00:15:49.081 "dhgroup": "ffdhe6144" 00:15:49.081 } 00:15:49.081 } 00:15:49.081 ]' 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.081 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.339 19:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:15:50.271 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.836 19:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.837 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.837 19:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.432 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.432 { 00:15:51.432 "cntlid": 87, 00:15:51.432 "qid": 0, 00:15:51.432 "state": "enabled", 00:15:51.432 "thread": "nvmf_tgt_poll_group_000", 00:15:51.432 "listen_address": { 00:15:51.432 "trtype": "TCP", 00:15:51.432 "adrfam": "IPv4", 00:15:51.432 "traddr": "10.0.0.2", 00:15:51.432 "trsvcid": "4420" 00:15:51.432 }, 00:15:51.432 "peer_address": { 00:15:51.432 "trtype": "TCP", 00:15:51.432 "adrfam": "IPv4", 00:15:51.432 "traddr": "10.0.0.1", 00:15:51.432 "trsvcid": "60696" 00:15:51.432 }, 00:15:51.432 "auth": { 00:15:51.432 "state": "completed", 
00:15:51.432 "digest": "sha384", 00:15:51.432 "dhgroup": "ffdhe6144" 00:15:51.432 } 00:15:51.432 } 00:15:51.432 ]' 00:15:51.432 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.689 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.689 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.689 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.689 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.689 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.689 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.689 19:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.946 19:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.878 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.135 19:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.088 00:15:54.088 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.088 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.088 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.346 { 00:15:54.346 "cntlid": 89, 00:15:54.346 "qid": 0, 00:15:54.346 "state": "enabled", 00:15:54.346 "thread": "nvmf_tgt_poll_group_000", 00:15:54.346 "listen_address": { 00:15:54.346 "trtype": "TCP", 00:15:54.346 "adrfam": "IPv4", 00:15:54.346 "traddr": "10.0.0.2", 00:15:54.346 "trsvcid": "4420" 00:15:54.346 }, 00:15:54.346 "peer_address": { 00:15:54.346 "trtype": "TCP", 00:15:54.346 "adrfam": "IPv4", 00:15:54.346 "traddr": "10.0.0.1", 00:15:54.346 "trsvcid": "60730" 00:15:54.346 }, 00:15:54.346 "auth": { 00:15:54.346 "state": "completed", 00:15:54.346 "digest": "sha384", 00:15:54.346 "dhgroup": "ffdhe8192" 00:15:54.346 } 00:15:54.346 } 00:15:54.346 ]' 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.346 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.603 19:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.535 19:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.793 19:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:15:56.727 00:15:56.727 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.727 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.727 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.985 { 00:15:56.985 "cntlid": 91, 00:15:56.985 "qid": 0, 00:15:56.985 "state": "enabled", 00:15:56.985 "thread": "nvmf_tgt_poll_group_000", 00:15:56.985 "listen_address": { 00:15:56.985 "trtype": "TCP", 00:15:56.985 "adrfam": "IPv4", 00:15:56.985 "traddr": "10.0.0.2", 00:15:56.985 "trsvcid": "4420" 00:15:56.985 }, 00:15:56.985 "peer_address": { 00:15:56.985 "trtype": "TCP", 00:15:56.985 "adrfam": "IPv4", 00:15:56.985 "traddr": "10.0.0.1", 00:15:56.985 "trsvcid": "60762" 00:15:56.985 }, 00:15:56.985 "auth": { 00:15:56.985 "state": "completed", 00:15:56.985 "digest": "sha384", 00:15:56.985 "dhgroup": "ffdhe8192" 00:15:56.985 } 00:15:56.985 } 00:15:56.985 ]' 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.985 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.243 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.243 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.243 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.501 19:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.435 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.702 19:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.636 00:15:59.636 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.636 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.636 19:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.894 { 
00:15:59.894 "cntlid": 93, 00:15:59.894 "qid": 0, 00:15:59.894 "state": "enabled", 00:15:59.894 "thread": "nvmf_tgt_poll_group_000", 00:15:59.894 "listen_address": { 00:15:59.894 "trtype": "TCP", 00:15:59.894 "adrfam": "IPv4", 00:15:59.894 "traddr": "10.0.0.2", 00:15:59.894 "trsvcid": "4420" 00:15:59.894 }, 00:15:59.894 "peer_address": { 00:15:59.894 "trtype": "TCP", 00:15:59.894 "adrfam": "IPv4", 00:15:59.894 "traddr": "10.0.0.1", 00:15:59.894 "trsvcid": "60790" 00:15:59.894 }, 00:15:59.894 "auth": { 00:15:59.894 "state": "completed", 00:15:59.894 "digest": "sha384", 00:15:59.894 "dhgroup": "ffdhe8192" 00:15:59.894 } 00:15:59.894 } 00:15:59.894 ]' 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.894 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.152 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.152 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.152 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.410 19:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.343 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.602 19:10:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.602 19:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.535 00:16:02.535 19:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.535 19:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.535 19:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.793 { 00:16:02.793 "cntlid": 95, 00:16:02.793 "qid": 0, 00:16:02.793 "state": "enabled", 00:16:02.793 "thread": "nvmf_tgt_poll_group_000", 00:16:02.793 "listen_address": { 00:16:02.793 "trtype": "TCP", 00:16:02.793 "adrfam": "IPv4", 00:16:02.793 "traddr": "10.0.0.2", 00:16:02.793 "trsvcid": "4420" 00:16:02.793 }, 00:16:02.793 "peer_address": { 00:16:02.793 "trtype": "TCP", 00:16:02.793 "adrfam": "IPv4", 00:16:02.793 "traddr": "10.0.0.1", 00:16:02.793 "trsvcid": "60012" 00:16:02.793 }, 00:16:02.793 "auth": { 00:16:02.793 "state": "completed", 00:16:02.793 "digest": "sha384", 00:16:02.793 "dhgroup": "ffdhe8192" 00:16:02.793 } 00:16:02.793 } 00:16:02.793 ]' 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:02.793 19:10:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.793 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.049 19:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.983 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.240 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:04.240 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.240 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.240 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:04.240 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:04.240 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.241 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.241 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.241 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.241 19:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.241 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.241 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.499 00:16:04.757 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.757 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.757 19:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.757 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.757 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.757 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.757 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.015 19:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.015 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.015 { 00:16:05.015 "cntlid": 97, 00:16:05.015 "qid": 0, 00:16:05.015 "state": "enabled", 00:16:05.015 "thread": "nvmf_tgt_poll_group_000", 00:16:05.015 "listen_address": { 00:16:05.015 "trtype": "TCP", 00:16:05.015 "adrfam": "IPv4", 00:16:05.015 "traddr": "10.0.0.2", 00:16:05.015 "trsvcid": "4420" 00:16:05.015 }, 00:16:05.015 "peer_address": { 00:16:05.015 "trtype": "TCP", 00:16:05.015 "adrfam": "IPv4", 00:16:05.015 "traddr": "10.0.0.1", 00:16:05.015 "trsvcid": "60042" 00:16:05.015 }, 00:16:05.015 "auth": { 00:16:05.015 "state": "completed", 00:16:05.015 "digest": "sha512", 00:16:05.015 "dhgroup": "null" 00:16:05.015 } 00:16:05.016 } 00:16:05.016 ]' 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.016 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.274 19:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret 
DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.207 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.467 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:06.467 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.467 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.467 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.468 19:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.757 00:16:06.757 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.757 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.757 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.014 { 00:16:07.014 "cntlid": 99, 00:16:07.014 "qid": 0, 00:16:07.014 "state": "enabled", 00:16:07.014 "thread": "nvmf_tgt_poll_group_000", 00:16:07.014 "listen_address": { 00:16:07.014 "trtype": "TCP", 00:16:07.014 "adrfam": "IPv4", 00:16:07.014 "traddr": "10.0.0.2", 00:16:07.014 "trsvcid": "4420" 00:16:07.014 }, 00:16:07.014 "peer_address": { 00:16:07.014 "trtype": "TCP", 00:16:07.014 "adrfam": "IPv4", 00:16:07.014 "traddr": "10.0.0.1", 00:16:07.014 "trsvcid": "60068" 00:16:07.014 }, 00:16:07.014 "auth": { 00:16:07.014 "state": "completed", 00:16:07.014 "digest": "sha512", 00:16:07.014 "dhgroup": "null" 00:16:07.014 } 00:16:07.014 } 00:16:07.014 ]' 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.014 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.273 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:07.273 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.273 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.273 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.273 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.531 19:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:16:08.464 19:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.464 19:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.464 19:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.464 19:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.464 19:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.464 19:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.464 19:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.464 19:10:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.722 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.980 00:16:08.980 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.980 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.980 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.238 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.238 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.238 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.238 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.238 19:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.238 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.238 { 00:16:09.238 "cntlid": 101, 00:16:09.238 "qid": 0, 00:16:09.238 "state": "enabled", 00:16:09.238 "thread": "nvmf_tgt_poll_group_000", 00:16:09.238 "listen_address": { 00:16:09.238 "trtype": "TCP", 00:16:09.238 "adrfam": "IPv4", 00:16:09.238 "traddr": "10.0.0.2", 00:16:09.238 "trsvcid": "4420" 00:16:09.238 }, 00:16:09.238 "peer_address": { 00:16:09.238 "trtype": "TCP", 00:16:09.238 "adrfam": "IPv4", 00:16:09.238 "traddr": "10.0.0.1", 00:16:09.238 "trsvcid": "60110" 00:16:09.238 }, 00:16:09.238 "auth": 
{ 00:16:09.238 "state": "completed", 00:16:09.238 "digest": "sha512", 00:16:09.238 "dhgroup": "null" 00:16:09.238 } 00:16:09.238 } 00:16:09.238 ]' 00:16:09.238 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.496 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.496 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.496 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:09.496 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.496 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.496 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.496 19:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.754 19:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.705 19:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.963 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.221 00:16:11.221 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.221 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.221 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.479 { 00:16:11.479 "cntlid": 103, 00:16:11.479 "qid": 0, 00:16:11.479 "state": "enabled", 00:16:11.479 "thread": "nvmf_tgt_poll_group_000", 00:16:11.479 "listen_address": { 00:16:11.479 "trtype": "TCP", 00:16:11.479 "adrfam": "IPv4", 00:16:11.479 "traddr": "10.0.0.2", 00:16:11.479 "trsvcid": "4420" 00:16:11.479 }, 00:16:11.479 "peer_address": { 00:16:11.479 "trtype": "TCP", 00:16:11.479 "adrfam": "IPv4", 00:16:11.479 "traddr": "10.0.0.1", 00:16:11.479 "trsvcid": "48220" 00:16:11.479 }, 00:16:11.479 "auth": { 00:16:11.479 "state": "completed", 00:16:11.479 "digest": "sha512", 00:16:11.479 "dhgroup": "null" 00:16:11.479 } 00:16:11.479 } 00:16:11.479 ]' 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.479 19:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.738 19:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.112 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.369 00:16:13.369 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.369 19:10:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.369 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.627 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.627 19:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.627 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.627 19:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.627 19:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.627 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.627 { 00:16:13.627 "cntlid": 105, 00:16:13.627 "qid": 0, 00:16:13.627 "state": "enabled", 00:16:13.627 "thread": "nvmf_tgt_poll_group_000", 00:16:13.627 "listen_address": { 00:16:13.627 "trtype": "TCP", 00:16:13.627 "adrfam": "IPv4", 00:16:13.627 "traddr": "10.0.0.2", 00:16:13.627 "trsvcid": "4420" 00:16:13.627 }, 00:16:13.627 "peer_address": { 00:16:13.627 "trtype": "TCP", 00:16:13.627 "adrfam": "IPv4", 00:16:13.627 "traddr": "10.0.0.1", 00:16:13.627 "trsvcid": "48258" 00:16:13.627 }, 00:16:13.627 "auth": { 00:16:13.627 "state": "completed", 00:16:13.627 "digest": "sha512", 00:16:13.627 "dhgroup": "ffdhe2048" 00:16:13.627 } 00:16:13.627 } 00:16:13.627 ]' 00:16:13.627 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.627 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.627 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.884 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.884 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.884 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.884 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.885 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.143 19:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
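Alongside the RPC-level attach, each iteration also drives the kernel initiator: nvme-cli is handed the same key material as literal DHHC-1 secrets, the connection is confirmed and torn down, and the host entry is removed so the next key id starts from a clean subsystem. A sketch of that step with the NQNs from this run and placeholder secrets (the DHHC-1 strings below are illustrative, not the keys used here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# connect through the kernel host stack, authenticating both directions
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:<host key>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

# expect "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
nvme disconnect -n $subnqn

# remove the host so the next digest/dhgroup/key combination starts clean
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn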
00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.077 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.335 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:15.335 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.335 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.335 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:15.335 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.335 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.336 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.336 19:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.336 19:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.336 19:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.336 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.336 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.593 00:16:15.593 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.593 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.593 19:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.851 { 00:16:15.851 "cntlid": 107, 00:16:15.851 "qid": 0, 00:16:15.851 "state": "enabled", 00:16:15.851 "thread": 
"nvmf_tgt_poll_group_000", 00:16:15.851 "listen_address": { 00:16:15.851 "trtype": "TCP", 00:16:15.851 "adrfam": "IPv4", 00:16:15.851 "traddr": "10.0.0.2", 00:16:15.851 "trsvcid": "4420" 00:16:15.851 }, 00:16:15.851 "peer_address": { 00:16:15.851 "trtype": "TCP", 00:16:15.851 "adrfam": "IPv4", 00:16:15.851 "traddr": "10.0.0.1", 00:16:15.851 "trsvcid": "48282" 00:16:15.851 }, 00:16:15.851 "auth": { 00:16:15.851 "state": "completed", 00:16:15.851 "digest": "sha512", 00:16:15.851 "dhgroup": "ffdhe2048" 00:16:15.851 } 00:16:15.851 } 00:16:15.851 ]' 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.851 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.109 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:16.109 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.109 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.109 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.109 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.367 19:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.301 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.559 19:10:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.559 19:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.817 00:16:17.817 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.817 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.817 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.075 { 00:16:18.075 "cntlid": 109, 00:16:18.075 "qid": 0, 00:16:18.075 "state": "enabled", 00:16:18.075 "thread": "nvmf_tgt_poll_group_000", 00:16:18.075 "listen_address": { 00:16:18.075 "trtype": "TCP", 00:16:18.075 "adrfam": "IPv4", 00:16:18.075 "traddr": "10.0.0.2", 00:16:18.075 "trsvcid": "4420" 00:16:18.075 }, 00:16:18.075 "peer_address": { 00:16:18.075 "trtype": "TCP", 00:16:18.075 "adrfam": "IPv4", 00:16:18.075 "traddr": "10.0.0.1", 00:16:18.075 "trsvcid": "48310" 00:16:18.075 }, 00:16:18.075 "auth": { 00:16:18.075 "state": "completed", 00:16:18.075 "digest": "sha512", 00:16:18.075 "dhgroup": "ffdhe2048" 00:16:18.075 } 00:16:18.075 } 00:16:18.075 ]' 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.075 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.332 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.332 19:10:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.332 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.332 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.332 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.590 19:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.523 19:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.781 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:19.781 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.782 19:11:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.039 00:16:20.039 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.039 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.039 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.297 { 00:16:20.297 "cntlid": 111, 00:16:20.297 "qid": 0, 00:16:20.297 "state": "enabled", 00:16:20.297 "thread": "nvmf_tgt_poll_group_000", 00:16:20.297 "listen_address": { 00:16:20.297 "trtype": "TCP", 00:16:20.297 "adrfam": "IPv4", 00:16:20.297 "traddr": "10.0.0.2", 00:16:20.297 "trsvcid": "4420" 00:16:20.297 }, 00:16:20.297 "peer_address": { 00:16:20.297 "trtype": "TCP", 00:16:20.297 "adrfam": "IPv4", 00:16:20.297 "traddr": "10.0.0.1", 00:16:20.297 "trsvcid": "48338" 00:16:20.297 }, 00:16:20.297 "auth": { 00:16:20.297 "state": "completed", 00:16:20.297 "digest": "sha512", 00:16:20.297 "dhgroup": "ffdhe2048" 00:16:20.297 } 00:16:20.297 } 00:16:20.297 ]' 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.297 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.555 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.555 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.555 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.555 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.555 19:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.813 19:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.781 19:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.039 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.296 00:16:22.296 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.296 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.296 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.554 { 00:16:22.554 "cntlid": 113, 00:16:22.554 "qid": 0, 00:16:22.554 "state": "enabled", 00:16:22.554 "thread": "nvmf_tgt_poll_group_000", 00:16:22.554 "listen_address": { 00:16:22.554 "trtype": "TCP", 00:16:22.554 "adrfam": "IPv4", 00:16:22.554 "traddr": "10.0.0.2", 00:16:22.554 "trsvcid": "4420" 00:16:22.554 }, 00:16:22.554 "peer_address": { 00:16:22.554 "trtype": "TCP", 00:16:22.554 "adrfam": "IPv4", 00:16:22.554 "traddr": "10.0.0.1", 00:16:22.554 "trsvcid": "45614" 00:16:22.554 }, 00:16:22.554 "auth": { 00:16:22.554 "state": "completed", 00:16:22.554 "digest": "sha512", 00:16:22.554 "dhgroup": "ffdhe3072" 00:16:22.554 } 00:16:22.554 } 00:16:22.554 ]' 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.554 19:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.811 19:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.745 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.003 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.568 00:16:24.568 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.568 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.568 19:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.825 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.825 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.825 19:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.825 19:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.825 19:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.825 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.825 { 00:16:24.825 "cntlid": 115, 00:16:24.825 "qid": 0, 00:16:24.825 "state": "enabled", 00:16:24.825 "thread": "nvmf_tgt_poll_group_000", 00:16:24.825 "listen_address": { 00:16:24.825 "trtype": "TCP", 00:16:24.825 "adrfam": "IPv4", 00:16:24.825 "traddr": "10.0.0.2", 00:16:24.825 "trsvcid": "4420" 00:16:24.825 }, 00:16:24.825 "peer_address": { 00:16:24.825 "trtype": "TCP", 00:16:24.825 "adrfam": "IPv4", 00:16:24.826 "traddr": "10.0.0.1", 00:16:24.826 "trsvcid": "45656" 00:16:24.826 }, 00:16:24.826 "auth": { 00:16:24.826 "state": "completed", 00:16:24.826 "digest": "sha512", 00:16:24.826 "dhgroup": "ffdhe3072" 00:16:24.826 } 00:16:24.826 } 
00:16:24.826 ]' 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.826 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.083 19:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.453 19:11:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.453 19:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.015 00:16:27.015 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.015 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.015 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.272 { 00:16:27.272 "cntlid": 117, 00:16:27.272 "qid": 0, 00:16:27.272 "state": "enabled", 00:16:27.272 "thread": "nvmf_tgt_poll_group_000", 00:16:27.272 "listen_address": { 00:16:27.272 "trtype": "TCP", 00:16:27.272 "adrfam": "IPv4", 00:16:27.272 "traddr": "10.0.0.2", 00:16:27.272 "trsvcid": "4420" 00:16:27.272 }, 00:16:27.272 "peer_address": { 00:16:27.272 "trtype": "TCP", 00:16:27.272 "adrfam": "IPv4", 00:16:27.272 "traddr": "10.0.0.1", 00:16:27.272 "trsvcid": "45688" 00:16:27.272 }, 00:16:27.272 "auth": { 00:16:27.272 "state": "completed", 00:16:27.272 "digest": "sha512", 00:16:27.272 "dhgroup": "ffdhe3072" 00:16:27.272 } 00:16:27.272 } 00:16:27.272 ]' 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.272 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.529 19:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.472 19:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.729 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.987 00:16:28.987 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.987 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.987 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.244 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.244 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.244 19:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.244 19:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.244 19:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.244 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.244 { 00:16:29.244 "cntlid": 119, 00:16:29.244 "qid": 0, 00:16:29.244 "state": "enabled", 00:16:29.244 "thread": "nvmf_tgt_poll_group_000", 00:16:29.244 "listen_address": { 00:16:29.244 "trtype": "TCP", 00:16:29.244 "adrfam": "IPv4", 00:16:29.244 "traddr": "10.0.0.2", 00:16:29.244 "trsvcid": "4420" 00:16:29.244 }, 00:16:29.244 "peer_address": { 00:16:29.244 "trtype": "TCP", 00:16:29.244 "adrfam": "IPv4", 00:16:29.244 "traddr": "10.0.0.1", 00:16:29.244 "trsvcid": "45726" 00:16:29.244 }, 00:16:29.244 "auth": { 00:16:29.244 "state": "completed", 00:16:29.244 "digest": "sha512", 00:16:29.244 "dhgroup": "ffdhe3072" 00:16:29.244 } 00:16:29.244 } 00:16:29.244 ]' 00:16:29.244 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.501 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.501 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.501 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.501 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.501 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.501 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.501 19:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.759 19:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:16:30.691 19:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.692 19:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.692 19:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.692 19:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.692 19:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.692 19:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.692 19:11:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.692 19:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.692 19:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.949 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.206 00:16:31.206 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.206 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.206 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.463 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.463 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.463 19:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.463 19:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.463 19:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.463 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.463 { 00:16:31.463 "cntlid": 121, 00:16:31.463 "qid": 0, 00:16:31.463 "state": "enabled", 00:16:31.463 "thread": "nvmf_tgt_poll_group_000", 00:16:31.463 "listen_address": { 00:16:31.463 "trtype": "TCP", 00:16:31.463 "adrfam": "IPv4", 
00:16:31.463 "traddr": "10.0.0.2", 00:16:31.463 "trsvcid": "4420" 00:16:31.463 }, 00:16:31.463 "peer_address": { 00:16:31.463 "trtype": "TCP", 00:16:31.463 "adrfam": "IPv4", 00:16:31.463 "traddr": "10.0.0.1", 00:16:31.463 "trsvcid": "36482" 00:16:31.463 }, 00:16:31.463 "auth": { 00:16:31.463 "state": "completed", 00:16:31.463 "digest": "sha512", 00:16:31.463 "dhgroup": "ffdhe4096" 00:16:31.463 } 00:16:31.463 } 00:16:31.463 ]' 00:16:31.463 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.720 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.720 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.720 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.720 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.720 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.720 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.721 19:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.978 19:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.908 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.166 19:11:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.166 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.730 00:16:33.730 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.730 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.730 19:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.988 { 00:16:33.988 "cntlid": 123, 00:16:33.988 "qid": 0, 00:16:33.988 "state": "enabled", 00:16:33.988 "thread": "nvmf_tgt_poll_group_000", 00:16:33.988 "listen_address": { 00:16:33.988 "trtype": "TCP", 00:16:33.988 "adrfam": "IPv4", 00:16:33.988 "traddr": "10.0.0.2", 00:16:33.988 "trsvcid": "4420" 00:16:33.988 }, 00:16:33.988 "peer_address": { 00:16:33.988 "trtype": "TCP", 00:16:33.988 "adrfam": "IPv4", 00:16:33.988 "traddr": "10.0.0.1", 00:16:33.988 "trsvcid": "36510" 00:16:33.988 }, 00:16:33.988 "auth": { 00:16:33.988 "state": "completed", 00:16:33.988 "digest": "sha512", 00:16:33.988 "dhgroup": "ffdhe4096" 00:16:33.988 } 00:16:33.988 } 00:16:33.988 ]' 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.988 19:11:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.988 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.246 19:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.180 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.438 19:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.004 00:16:36.004 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.004 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.004 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.280 { 00:16:36.280 "cntlid": 125, 00:16:36.280 "qid": 0, 00:16:36.280 "state": "enabled", 00:16:36.280 "thread": "nvmf_tgt_poll_group_000", 00:16:36.280 "listen_address": { 00:16:36.280 "trtype": "TCP", 00:16:36.280 "adrfam": "IPv4", 00:16:36.280 "traddr": "10.0.0.2", 00:16:36.280 "trsvcid": "4420" 00:16:36.280 }, 00:16:36.280 "peer_address": { 00:16:36.280 "trtype": "TCP", 00:16:36.280 "adrfam": "IPv4", 00:16:36.280 "traddr": "10.0.0.1", 00:16:36.280 "trsvcid": "36536" 00:16:36.280 }, 00:16:36.280 "auth": { 00:16:36.280 "state": "completed", 00:16:36.280 "digest": "sha512", 00:16:36.280 "dhgroup": "ffdhe4096" 00:16:36.280 } 00:16:36.280 } 00:16:36.280 ]' 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.280 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.539 19:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:16:37.504 19:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
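The repeated pattern in the trace above boils down to the minimal sketch below. It is reconstructed only from the RPC and nvme-cli calls visible in this log; the target-side nvmf_* calls are assumed to use the target application's default RPC socket, and key2/ckey2 are keyring key names registered earlier in the run (not shown here).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side (bdev_nvme) RPCs, as in the trace
rpc_cmd() { "$rpc" "$@"; }                         # target-side RPCs; default socket assumed

# Limit the host-side bdev_nvme module to the digest/DH-group pair under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
# Allow the host on the subsystem with the DH-HMAC-CHAP key (and controller key) under test.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Attach a controller over TCP, authenticating with the same key pair.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Confirm the controller came up and the qpair reports completed sha512/ffdhe4096 auth.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
# Tear down, repeat the handshake with the kernel initiator (nvme connect/disconnect with
# the DHHC-1 secrets), then deregister the host before the next key/dhgroup combination.
hostrpc bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"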
00:16:37.505 19:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.505 19:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.505 19:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.505 19:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.505 19:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.505 19:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.505 19:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.763 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.329 00:16:38.329 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.329 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.329 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.587 { 00:16:38.587 "cntlid": 127, 00:16:38.587 "qid": 0, 00:16:38.587 "state": "enabled", 00:16:38.587 "thread": "nvmf_tgt_poll_group_000", 00:16:38.587 "listen_address": { 00:16:38.587 "trtype": "TCP", 00:16:38.587 "adrfam": "IPv4", 00:16:38.587 "traddr": "10.0.0.2", 00:16:38.587 "trsvcid": "4420" 00:16:38.587 }, 00:16:38.587 "peer_address": { 00:16:38.587 "trtype": "TCP", 00:16:38.587 "adrfam": "IPv4", 00:16:38.587 "traddr": "10.0.0.1", 00:16:38.587 "trsvcid": "36564" 00:16:38.587 }, 00:16:38.587 "auth": { 00:16:38.587 "state": "completed", 00:16:38.587 "digest": "sha512", 00:16:38.587 "dhgroup": "ffdhe4096" 00:16:38.587 } 00:16:38.587 } 00:16:38.587 ]' 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.587 19:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.845 19:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.779 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.037 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.603 00:16:40.603 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.603 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.603 19:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.861 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.861 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.861 19:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.861 19:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.861 19:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.861 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.861 { 00:16:40.861 "cntlid": 129, 00:16:40.861 "qid": 0, 00:16:40.861 "state": "enabled", 00:16:40.861 "thread": "nvmf_tgt_poll_group_000", 00:16:40.861 "listen_address": { 00:16:40.861 "trtype": "TCP", 00:16:40.861 "adrfam": "IPv4", 00:16:40.861 "traddr": "10.0.0.2", 00:16:40.861 "trsvcid": "4420" 00:16:40.861 }, 00:16:40.861 "peer_address": { 00:16:40.861 "trtype": "TCP", 00:16:40.862 "adrfam": "IPv4", 00:16:40.862 "traddr": "10.0.0.1", 00:16:40.862 "trsvcid": "36586" 00:16:40.862 }, 00:16:40.862 "auth": { 00:16:40.862 "state": "completed", 00:16:40.862 "digest": "sha512", 00:16:40.862 "dhgroup": "ffdhe6144" 00:16:40.862 } 00:16:40.862 } 00:16:40.862 ]' 00:16:40.862 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.862 19:11:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.862 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.119 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.119 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.119 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.119 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.119 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.377 19:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.307 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.565 19:11:22 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.565 19:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.129 00:16:43.129 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.129 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.129 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.386 { 00:16:43.386 "cntlid": 131, 00:16:43.386 "qid": 0, 00:16:43.386 "state": "enabled", 00:16:43.386 "thread": "nvmf_tgt_poll_group_000", 00:16:43.386 "listen_address": { 00:16:43.386 "trtype": "TCP", 00:16:43.386 "adrfam": "IPv4", 00:16:43.386 "traddr": "10.0.0.2", 00:16:43.386 "trsvcid": "4420" 00:16:43.386 }, 00:16:43.386 "peer_address": { 00:16:43.386 "trtype": "TCP", 00:16:43.386 "adrfam": "IPv4", 00:16:43.386 "traddr": "10.0.0.1", 00:16:43.386 "trsvcid": "50018" 00:16:43.386 }, 00:16:43.386 "auth": { 00:16:43.386 "state": "completed", 00:16:43.386 "digest": "sha512", 00:16:43.386 "dhgroup": "ffdhe6144" 00:16:43.386 } 00:16:43.386 } 00:16:43.386 ]' 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.386 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.387 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.387 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.387 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.387 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.387 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.387 19:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.643 19:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:16:44.571 19:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.571 19:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.571 19:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.571 19:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.571 19:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.571 19:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.571 19:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.572 19:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.135 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.392 00:16:45.650 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.650 19:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.650 19:11:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.650 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.650 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.650 19:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.650 19:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.908 { 00:16:45.908 "cntlid": 133, 00:16:45.908 "qid": 0, 00:16:45.908 "state": "enabled", 00:16:45.908 "thread": "nvmf_tgt_poll_group_000", 00:16:45.908 "listen_address": { 00:16:45.908 "trtype": "TCP", 00:16:45.908 "adrfam": "IPv4", 00:16:45.908 "traddr": "10.0.0.2", 00:16:45.908 "trsvcid": "4420" 00:16:45.908 }, 00:16:45.908 "peer_address": { 00:16:45.908 "trtype": "TCP", 00:16:45.908 "adrfam": "IPv4", 00:16:45.908 "traddr": "10.0.0.1", 00:16:45.908 "trsvcid": "50050" 00:16:45.908 }, 00:16:45.908 "auth": { 00:16:45.908 "state": "completed", 00:16:45.908 "digest": "sha512", 00:16:45.908 "dhgroup": "ffdhe6144" 00:16:45.908 } 00:16:45.908 } 00:16:45.908 ]' 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.908 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.166 19:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:16:47.101 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.101 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.101 19:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.101 19:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.101 19:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.101 19:11:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.101 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.101 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.359 19:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.925 00:16:47.925 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.925 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.925 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.183 { 00:16:48.183 "cntlid": 135, 00:16:48.183 "qid": 0, 00:16:48.183 "state": "enabled", 00:16:48.183 "thread": "nvmf_tgt_poll_group_000", 00:16:48.183 "listen_address": { 00:16:48.183 "trtype": "TCP", 00:16:48.183 "adrfam": "IPv4", 00:16:48.183 "traddr": "10.0.0.2", 00:16:48.183 "trsvcid": "4420" 00:16:48.183 }, 
00:16:48.183 "peer_address": { 00:16:48.183 "trtype": "TCP", 00:16:48.183 "adrfam": "IPv4", 00:16:48.183 "traddr": "10.0.0.1", 00:16:48.183 "trsvcid": "50078" 00:16:48.183 }, 00:16:48.183 "auth": { 00:16:48.183 "state": "completed", 00:16:48.183 "digest": "sha512", 00:16:48.183 "dhgroup": "ffdhe6144" 00:16:48.183 } 00:16:48.183 } 00:16:48.183 ]' 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.183 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.441 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.441 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.441 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.699 19:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:16:49.633 19:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.633 19:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.634 19:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.634 19:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.634 19:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.634 19:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.634 19:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.634 19:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.634 19:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.892 19:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.825 00:16:50.825 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.825 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.825 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.083 { 00:16:51.083 "cntlid": 137, 00:16:51.083 "qid": 0, 00:16:51.083 "state": "enabled", 00:16:51.083 "thread": "nvmf_tgt_poll_group_000", 00:16:51.083 "listen_address": { 00:16:51.083 "trtype": "TCP", 00:16:51.083 "adrfam": "IPv4", 00:16:51.083 "traddr": "10.0.0.2", 00:16:51.083 "trsvcid": "4420" 00:16:51.083 }, 00:16:51.083 "peer_address": { 00:16:51.083 "trtype": "TCP", 00:16:51.083 "adrfam": "IPv4", 00:16:51.083 "traddr": "10.0.0.1", 00:16:51.083 "trsvcid": "50090" 00:16:51.083 }, 00:16:51.083 "auth": { 00:16:51.083 "state": "completed", 00:16:51.083 "digest": "sha512", 00:16:51.083 "dhgroup": "ffdhe8192" 00:16:51.083 } 00:16:51.083 } 00:16:51.083 ]' 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.083 19:11:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.083 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.340 19:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.330 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.588 19:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.522 00:16:53.522 19:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.522 19:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.522 19:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.780 { 00:16:53.780 "cntlid": 139, 00:16:53.780 "qid": 0, 00:16:53.780 "state": "enabled", 00:16:53.780 "thread": "nvmf_tgt_poll_group_000", 00:16:53.780 "listen_address": { 00:16:53.780 "trtype": "TCP", 00:16:53.780 "adrfam": "IPv4", 00:16:53.780 "traddr": "10.0.0.2", 00:16:53.780 "trsvcid": "4420" 00:16:53.780 }, 00:16:53.780 "peer_address": { 00:16:53.780 "trtype": "TCP", 00:16:53.780 "adrfam": "IPv4", 00:16:53.780 "traddr": "10.0.0.1", 00:16:53.780 "trsvcid": "37542" 00:16:53.780 }, 00:16:53.780 "auth": { 00:16:53.780 "state": "completed", 00:16:53.780 "digest": "sha512", 00:16:53.780 "dhgroup": "ffdhe8192" 00:16:53.780 } 00:16:53.780 } 00:16:53.780 ]' 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.780 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.345 19:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjMxOTRhMzVmMDlhMmI3ODQ5ZDkyOGZiMzdlYTliZDa+EcJQ: --dhchap-ctrl-secret DHHC-1:02:YTg0OWFlZjFhODdhNGY2MWZjZDg2NGYzN2UyNGFkMjBmMzdjNDk1MzM0MjFkZjIxzdfuyw==: 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.279 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.537 19:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.470 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.470 { 00:16:56.470 "cntlid": 141, 00:16:56.470 "qid": 0, 00:16:56.470 "state": "enabled", 00:16:56.470 "thread": "nvmf_tgt_poll_group_000", 00:16:56.470 "listen_address": { 00:16:56.470 "trtype": "TCP", 00:16:56.470 "adrfam": "IPv4", 00:16:56.470 "traddr": "10.0.0.2", 00:16:56.470 "trsvcid": "4420" 00:16:56.470 }, 00:16:56.470 "peer_address": { 00:16:56.470 "trtype": "TCP", 00:16:56.470 "adrfam": "IPv4", 00:16:56.470 "traddr": "10.0.0.1", 00:16:56.470 "trsvcid": "37574" 00:16:56.470 }, 00:16:56.470 "auth": { 00:16:56.470 "state": "completed", 00:16:56.470 "digest": "sha512", 00:16:56.470 "dhgroup": "ffdhe8192" 00:16:56.470 } 00:16:56.470 } 00:16:56.470 ]' 00:16:56.470 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.728 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.728 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.728 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.728 19:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.728 19:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.728 19:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.728 19:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.985 19:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzcyZGFmMGQxOWFkZjljNGY1NzNjOGRkNjMxZjZkZTk5ZGE3ZTE1Y2U3MDg4NjE0vPNdcA==: --dhchap-ctrl-secret DHHC-1:01:YWUxZmFiMDdiNmIyZjBkOGU4NzMwNjc0OGU2ODM2MDcfie76: 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.921 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.179 19:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.113 00:16:59.113 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.113 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.113 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.370 { 00:16:59.370 "cntlid": 143, 00:16:59.370 "qid": 0, 00:16:59.370 "state": "enabled", 00:16:59.370 "thread": "nvmf_tgt_poll_group_000", 00:16:59.370 "listen_address": { 00:16:59.370 "trtype": "TCP", 00:16:59.370 "adrfam": "IPv4", 00:16:59.370 "traddr": "10.0.0.2", 00:16:59.370 "trsvcid": "4420" 00:16:59.370 }, 00:16:59.370 "peer_address": { 00:16:59.370 "trtype": "TCP", 00:16:59.370 "adrfam": "IPv4", 00:16:59.370 "traddr": "10.0.0.1", 00:16:59.370 "trsvcid": "37598" 00:16:59.370 }, 00:16:59.370 "auth": { 00:16:59.370 "state": "completed", 00:16:59.370 "digest": "sha512", 00:16:59.370 "dhgroup": "ffdhe8192" 00:16:59.370 } 00:16:59.370 } 00:16:59.370 ]' 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.370 
19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.370 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.628 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.628 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.628 19:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.886 19:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:00.830 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.090 19:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.019 00:17:02.019 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.019 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.019 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.276 { 00:17:02.276 "cntlid": 145, 00:17:02.276 "qid": 0, 00:17:02.276 "state": "enabled", 00:17:02.276 "thread": "nvmf_tgt_poll_group_000", 00:17:02.276 "listen_address": { 00:17:02.276 "trtype": "TCP", 00:17:02.276 "adrfam": "IPv4", 00:17:02.276 "traddr": "10.0.0.2", 00:17:02.276 "trsvcid": "4420" 00:17:02.276 }, 00:17:02.276 "peer_address": { 00:17:02.276 "trtype": "TCP", 00:17:02.276 "adrfam": "IPv4", 00:17:02.276 "traddr": "10.0.0.1", 00:17:02.276 "trsvcid": "38388" 00:17:02.276 }, 00:17:02.276 "auth": { 00:17:02.276 "state": "completed", 00:17:02.276 "digest": "sha512", 00:17:02.276 "dhgroup": "ffdhe8192" 00:17:02.276 } 00:17:02.276 } 00:17:02.276 ]' 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.276 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.532 19:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MmFhM2VkMGJmZWY0MWQ5YThiMjk1MGE3NmEzZjE4NjBjOGYyNmE1ZmExYjdjYmVl9q3boA==: --dhchap-ctrl-secret DHHC-1:03:MjZmNThlMjU1NTFmZGQ5ZjU5MmIwMjhjMzExYTAwMGRjMzVhOGQ4MDRmNDE5MTE3ZmU0YTA3ZTQ2OWRiNDE5YhuauEg=: 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.461 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.462 19:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:17:04.391 request: 00:17:04.391 { 00:17:04.391 "name": "nvme0", 00:17:04.391 "trtype": "tcp", 00:17:04.391 "traddr": "10.0.0.2", 00:17:04.391 "adrfam": "ipv4", 00:17:04.391 "trsvcid": "4420", 00:17:04.391 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:04.391 "prchk_reftag": false, 00:17:04.391 "prchk_guard": false, 00:17:04.391 "hdgst": false, 00:17:04.391 "ddgst": false, 00:17:04.391 "dhchap_key": "key2", 00:17:04.391 "method": "bdev_nvme_attach_controller", 00:17:04.391 "req_id": 1 00:17:04.391 } 00:17:04.391 Got JSON-RPC error response 00:17:04.391 response: 00:17:04.391 { 00:17:04.391 "code": -5, 00:17:04.391 "message": "Input/output error" 00:17:04.391 } 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.391 19:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.321 request: 00:17:05.321 { 00:17:05.321 "name": "nvme0", 00:17:05.321 "trtype": "tcp", 00:17:05.321 "traddr": "10.0.0.2", 00:17:05.321 "adrfam": "ipv4", 00:17:05.321 "trsvcid": "4420", 00:17:05.321 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:05.321 "prchk_reftag": false, 00:17:05.321 "prchk_guard": false, 00:17:05.321 "hdgst": false, 00:17:05.321 "ddgst": false, 00:17:05.321 "dhchap_key": "key1", 00:17:05.321 "dhchap_ctrlr_key": "ckey2", 00:17:05.321 "method": "bdev_nvme_attach_controller", 00:17:05.321 "req_id": 1 00:17:05.321 } 00:17:05.321 Got JSON-RPC error response 00:17:05.321 response: 00:17:05.321 { 00:17:05.321 "code": -5, 00:17:05.321 "message": "Input/output error" 00:17:05.321 } 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.321 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.322 19:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.254 request: 00:17:06.254 { 00:17:06.254 "name": "nvme0", 00:17:06.254 "trtype": "tcp", 00:17:06.254 "traddr": "10.0.0.2", 00:17:06.254 "adrfam": "ipv4", 00:17:06.254 "trsvcid": "4420", 00:17:06.254 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:06.254 "prchk_reftag": false, 00:17:06.254 "prchk_guard": false, 00:17:06.254 "hdgst": false, 00:17:06.254 "ddgst": false, 00:17:06.254 "dhchap_key": "key1", 00:17:06.254 "dhchap_ctrlr_key": "ckey1", 00:17:06.254 "method": "bdev_nvme_attach_controller", 00:17:06.254 "req_id": 1 00:17:06.254 } 00:17:06.254 Got JSON-RPC error response 00:17:06.254 response: 00:17:06.254 { 00:17:06.254 "code": -5, 00:17:06.254 "message": "Input/output error" 00:17:06.254 } 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3296357 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3296357 ']' 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3296357 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3296357 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3296357' 00:17:06.254 killing process with pid 3296357 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3296357 00:17:06.254 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3296357 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3319142 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3319142 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3319142 ']' 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.513 19:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3319142 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3319142 ']' 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.445 19:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.703 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.703 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:07.703 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:07.703 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.703 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.979 19:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.912 00:17:08.912 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.912 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.912 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.170 { 00:17:09.170 
"cntlid": 1, 00:17:09.170 "qid": 0, 00:17:09.170 "state": "enabled", 00:17:09.170 "thread": "nvmf_tgt_poll_group_000", 00:17:09.170 "listen_address": { 00:17:09.170 "trtype": "TCP", 00:17:09.170 "adrfam": "IPv4", 00:17:09.170 "traddr": "10.0.0.2", 00:17:09.170 "trsvcid": "4420" 00:17:09.170 }, 00:17:09.170 "peer_address": { 00:17:09.170 "trtype": "TCP", 00:17:09.170 "adrfam": "IPv4", 00:17:09.170 "traddr": "10.0.0.1", 00:17:09.170 "trsvcid": "38442" 00:17:09.170 }, 00:17:09.170 "auth": { 00:17:09.170 "state": "completed", 00:17:09.170 "digest": "sha512", 00:17:09.170 "dhgroup": "ffdhe8192" 00:17:09.170 } 00:17:09.170 } 00:17:09.170 ]' 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.170 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.427 19:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MzE3MTZhMmI0ODBmMWE5YjEzMTg5ZGQ2YjczYzg1ZWY3MTRlZjY5NmYwOTRhZWM1NjgzMjdkMTU1NDBlMDk2YtGo1Yc=: 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:10.360 19:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.618 19:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.875 request: 00:17:10.875 { 00:17:10.875 "name": "nvme0", 00:17:10.875 "trtype": "tcp", 00:17:10.875 "traddr": "10.0.0.2", 00:17:10.875 "adrfam": "ipv4", 00:17:10.875 "trsvcid": "4420", 00:17:10.875 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:10.875 "prchk_reftag": false, 00:17:10.875 "prchk_guard": false, 00:17:10.875 "hdgst": false, 00:17:10.875 "ddgst": false, 00:17:10.875 "dhchap_key": "key3", 00:17:10.875 "method": "bdev_nvme_attach_controller", 00:17:10.875 "req_id": 1 00:17:10.875 } 00:17:10.875 Got JSON-RPC error response 00:17:10.875 response: 00:17:10.875 { 00:17:10.875 "code": -5, 00:17:10.875 "message": "Input/output error" 00:17:10.875 } 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:10.875 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.134 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.392 request: 00:17:11.392 { 00:17:11.392 "name": "nvme0", 00:17:11.392 "trtype": "tcp", 00:17:11.392 "traddr": "10.0.0.2", 00:17:11.392 "adrfam": "ipv4", 00:17:11.392 "trsvcid": "4420", 00:17:11.392 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:11.392 "prchk_reftag": false, 00:17:11.392 "prchk_guard": false, 00:17:11.392 "hdgst": false, 00:17:11.392 "ddgst": false, 00:17:11.392 "dhchap_key": "key3", 00:17:11.392 "method": "bdev_nvme_attach_controller", 00:17:11.392 "req_id": 1 00:17:11.392 } 00:17:11.392 Got JSON-RPC error response 00:17:11.392 response: 00:17:11.392 { 00:17:11.392 "code": -5, 00:17:11.392 "message": "Input/output error" 00:17:11.392 } 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.392 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.651 19:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.909 request: 00:17:11.909 { 00:17:11.909 "name": "nvme0", 00:17:11.909 "trtype": "tcp", 00:17:11.909 "traddr": "10.0.0.2", 00:17:11.909 "adrfam": "ipv4", 00:17:11.909 "trsvcid": "4420", 00:17:11.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:11.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:11.909 "prchk_reftag": false, 00:17:11.909 "prchk_guard": false, 00:17:11.909 "hdgst": false, 00:17:11.909 "ddgst": false, 00:17:11.909 
"dhchap_key": "key0", 00:17:11.909 "dhchap_ctrlr_key": "key1", 00:17:11.909 "method": "bdev_nvme_attach_controller", 00:17:11.909 "req_id": 1 00:17:11.909 } 00:17:11.909 Got JSON-RPC error response 00:17:11.909 response: 00:17:11.909 { 00:17:11.909 "code": -5, 00:17:11.909 "message": "Input/output error" 00:17:11.909 } 00:17:11.909 19:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:11.909 19:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.909 19:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.909 19:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.909 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.909 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:12.167 00:17:12.167 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:12.167 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.167 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:12.424 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.424 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.424 19:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3296492 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3296492 ']' 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3296492 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3296492 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3296492' 00:17:12.682 killing process with pid 3296492 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3296492 00:17:12.682 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3296492 
00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.247 rmmod nvme_tcp 00:17:13.247 rmmod nvme_fabrics 00:17:13.247 rmmod nvme_keyring 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3319142 ']' 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3319142 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3319142 ']' 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3319142 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3319142 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3319142' 00:17:13.247 killing process with pid 3319142 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3319142 00:17:13.247 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3319142 00:17:13.813 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.813 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.813 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.813 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.813 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.813 19:11:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.814 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.814 19:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.713 19:11:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.713 19:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vIb /tmp/spdk.key-sha256.rMr /tmp/spdk.key-sha384.ouj /tmp/spdk.key-sha512.e0s /tmp/spdk.key-sha512.34s /tmp/spdk.key-sha384.vMN /tmp/spdk.key-sha256.aPD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:15.713 00:17:15.713 real 3m10.999s 00:17:15.713 user 7m24.648s 00:17:15.713 sys 0m25.144s 00:17:15.713 19:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:15.713 19:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.713 ************************************ 00:17:15.713 END TEST nvmf_auth_target 00:17:15.713 ************************************ 00:17:15.713 19:11:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:15.713 19:11:56 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:15.713 19:11:56 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.713 19:11:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:15.713 19:11:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.713 19:11:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.713 ************************************ 00:17:15.713 START TEST nvmf_bdevio_no_huge 00:17:15.713 ************************************ 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:15.713 * Looking for test storage... 00:17:15.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.713 19:11:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:17.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:17.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.608 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:17.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:17.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.609 19:11:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.609 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:17:17.867 00:17:17.867 --- 10.0.0.2 ping statistics --- 00:17:17.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.867 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:17:17.867 00:17:17.867 --- 10.0.0.1 ping statistics --- 00:17:17.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.867 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3321944 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3321944 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3321944 ']' 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.867 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.867 [2024-07-15 19:11:58.134014] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:17:17.867 [2024-07-15 19:11:58.134107] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:17.867 [2024-07-15 19:11:58.217676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.125 [2024-07-15 19:11:58.343813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:18.125 [2024-07-15 19:11:58.343887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.125 [2024-07-15 19:11:58.343906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.125 [2024-07-15 19:11:58.343920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.125 [2024-07-15 19:11:58.343932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.125 [2024-07-15 19:11:58.344020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.125 [2024-07-15 19:11:58.344077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:18.125 [2024-07-15 19:11:58.344131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:18.125 [2024-07-15 19:11:58.344134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.125 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.125 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:18.125 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.125 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.125 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.125 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.125 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.126 [2024-07-15 19:11:58.473512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.126 Malloc0 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.126 19:11:58 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.126 [2024-07-15 19:11:58.511661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:18.126 { 00:17:18.126 "params": { 00:17:18.126 "name": "Nvme$subsystem", 00:17:18.126 "trtype": "$TEST_TRANSPORT", 00:17:18.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:18.126 "adrfam": "ipv4", 00:17:18.126 "trsvcid": "$NVMF_PORT", 00:17:18.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:18.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:18.126 "hdgst": ${hdgst:-false}, 00:17:18.126 "ddgst": ${ddgst:-false} 00:17:18.126 }, 00:17:18.126 "method": "bdev_nvme_attach_controller" 00:17:18.126 } 00:17:18.126 EOF 00:17:18.126 )") 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:18.126 19:11:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:18.126 "params": { 00:17:18.126 "name": "Nvme1", 00:17:18.126 "trtype": "tcp", 00:17:18.126 "traddr": "10.0.0.2", 00:17:18.126 "adrfam": "ipv4", 00:17:18.126 "trsvcid": "4420", 00:17:18.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.126 "hdgst": false, 00:17:18.126 "ddgst": false 00:17:18.126 }, 00:17:18.126 "method": "bdev_nvme_attach_controller" 00:17:18.126 }' 00:17:18.384 [2024-07-15 19:11:58.558919] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:18.384 [2024-07-15 19:11:58.559016] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3322059 ] 00:17:18.384 [2024-07-15 19:11:58.624320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.384 [2024-07-15 19:11:58.738640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.384 [2024-07-15 19:11:58.738690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.384 [2024-07-15 19:11:58.738694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.641 I/O targets: 00:17:18.641 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:18.641 00:17:18.641 00:17:18.641 CUnit - A unit testing framework for C - Version 2.1-3 00:17:18.641 http://cunit.sourceforge.net/ 00:17:18.641 00:17:18.641 00:17:18.641 Suite: bdevio tests on: Nvme1n1 00:17:18.641 Test: blockdev write read block ...passed 00:17:18.641 Test: blockdev write zeroes read block ...passed 00:17:18.641 Test: blockdev write zeroes read no split ...passed 00:17:18.899 Test: blockdev write zeroes read split ...passed 00:17:18.899 Test: blockdev write zeroes read split partial ...passed 00:17:18.899 Test: blockdev reset ...[2024-07-15 19:11:59.152365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.899 [2024-07-15 19:11:59.152477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211fb0 (9): Bad file descriptor 00:17:18.899 [2024-07-15 19:11:59.206197] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:18.899 passed 00:17:18.899 Test: blockdev write read 8 blocks ...passed 00:17:18.899 Test: blockdev write read size > 128k ...passed 00:17:18.899 Test: blockdev write read invalid size ...passed 00:17:18.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.899 Test: blockdev write read max offset ...passed 00:17:19.157 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:19.157 Test: blockdev writev readv 8 blocks ...passed 00:17:19.157 Test: blockdev writev readv 30 x 1block ...passed 00:17:19.157 Test: blockdev writev readv block ...passed 00:17:19.157 Test: blockdev writev readv size > 128k ...passed 00:17:19.157 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:19.157 Test: blockdev comparev and writev ...[2024-07-15 19:11:59.503724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.503759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.503783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.503800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.504239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.504278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.504314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.504343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.504763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.504789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.504811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.504827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.505248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.505274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.505296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.157 [2024-07-15 19:11:59.505311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:19.157 passed 00:17:19.157 Test: blockdev nvme passthru rw ...passed 00:17:19.157 Test: blockdev nvme passthru vendor specific ...[2024-07-15 19:11:59.587293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.157 [2024-07-15 19:11:59.587321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.587547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.157 [2024-07-15 19:11:59.587571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.587773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.157 [2024-07-15 19:11:59.587796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:19.157 [2024-07-15 19:11:59.588016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:19.158 [2024-07-15 19:11:59.588045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:19.158 passed 00:17:19.415 Test: blockdev nvme admin passthru ...passed 00:17:19.415 Test: blockdev copy ...passed 00:17:19.415 00:17:19.415 Run Summary: Type Total Ran Passed Failed Inactive 00:17:19.415 suites 1 1 n/a 0 0 00:17:19.415 tests 23 23 23 0 0 00:17:19.415 asserts 152 152 152 0 n/a 00:17:19.415 00:17:19.415 Elapsed time = 1.411 seconds 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:19.674 rmmod nvme_tcp 00:17:19.674 rmmod nvme_fabrics 00:17:19.674 rmmod nvme_keyring 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3321944 ']' 00:17:19.674 19:12:00 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3321944 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3321944 ']' 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3321944 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.674 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3321944 00:17:19.931 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:19.931 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:19.931 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3321944' 00:17:19.931 killing process with pid 3321944 00:17:19.931 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3321944 00:17:19.931 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3321944 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.190 19:12:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.722 19:12:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.722 00:17:22.722 real 0m6.537s 00:17:22.722 user 0m11.397s 00:17:22.722 sys 0m2.447s 00:17:22.722 19:12:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.722 19:12:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:22.722 ************************************ 00:17:22.722 END TEST nvmf_bdevio_no_huge 00:17:22.722 ************************************ 00:17:22.722 19:12:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.722 19:12:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.722 19:12:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.722 19:12:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.722 19:12:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.722 ************************************ 00:17:22.722 START TEST nvmf_tls 00:17:22.723 ************************************ 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:22.723 * Looking for test storage... 
00:17:22.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.723 19:12:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.623 
19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:24.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:24.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:24.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:24.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:24.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:17:24.623 00:17:24.623 --- 10.0.0.2 ping statistics --- 00:17:24.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.623 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:17:24.623 00:17:24.623 --- 10.0.0.1 ping statistics --- 00:17:24.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.623 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3324247 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3324247 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3324247 ']' 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.623 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.624 19:12:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.624 [2024-07-15 19:12:04.785491] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:17:24.624 [2024-07-15 19:12:04.785573] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.624 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.624 [2024-07-15 19:12:04.863665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.624 [2024-07-15 19:12:04.982457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.624 [2024-07-15 19:12:04.982526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:24.624 [2024-07-15 19:12:04.982542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.624 [2024-07-15 19:12:04.982555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.624 [2024-07-15 19:12:04.982566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.624 [2024-07-15 19:12:04.982597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.624 19:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.624 19:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:24.624 19:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.624 19:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.624 19:12:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.920 19:12:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.920 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:24.920 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:24.920 true 00:17:24.920 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:24.920 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:25.177 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:25.177 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:25.177 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:25.435 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.435 19:12:05 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:25.692 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:25.692 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:25.692 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:25.950 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.950 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:26.208 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:26.208 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:26.208 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.208 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:26.465 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:26.465 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:26.465 19:12:06 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:26.723 19:12:07 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.723 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:26.982 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:26.982 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:26.982 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:27.239 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:27.239 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:27.496 19:12:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.uD5sC5vF5H 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.3yGSkyc3xd 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.uD5sC5vF5H 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3yGSkyc3xd 00:17:27.753 19:12:07 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:28.011 19:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:28.290 19:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.uD5sC5vF5H 00:17:28.290 19:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uD5sC5vF5H 00:17:28.290 19:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:28.548 [2024-07-15 19:12:08.775898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.548 19:12:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:28.806 19:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:29.064 [2024-07-15 19:12:09.273232] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:29.064 [2024-07-15 19:12:09.273483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.064 19:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:29.322 malloc0 00:17:29.323 19:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:29.580 19:12:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uD5sC5vF5H 00:17:29.838 [2024-07-15 19:12:10.043495] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:29.838 19:12:10 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uD5sC5vF5H 00:17:29.838 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.814 Initializing NVMe Controllers 00:17:39.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:39.814 Initialization complete. Launching workers. 
00:17:39.814 ======================================================== 00:17:39.814 Latency(us) 00:17:39.814 Device Information : IOPS MiB/s Average min max 00:17:39.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7458.58 29.14 8583.67 1191.58 9468.68 00:17:39.814 ======================================================== 00:17:39.814 Total : 7458.58 29.14 8583.67 1191.58 9468.68 00:17:39.814 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uD5sC5vF5H 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uD5sC5vF5H' 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3326641 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3326641 /var/tmp/bdevperf.sock 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3326641 ']' 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.814 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.814 [2024-07-15 19:12:20.216202] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:39.814 [2024-07-15 19:12:20.216300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326641 ] 00:17:40.071 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.071 [2024-07-15 19:12:20.278121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.071 [2024-07-15 19:12:20.385778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.071 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.071 19:12:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:40.071 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uD5sC5vF5H 00:17:40.636 [2024-07-15 19:12:20.766959] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.636 [2024-07-15 19:12:20.767079] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.636 TLSTESTn1 00:17:40.636 19:12:20 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:40.636 Running I/O for 10 seconds... 00:17:52.842 00:17:52.842 Latency(us) 00:17:52.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.842 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:52.842 Verification LBA range: start 0x0 length 0x2000 00:17:52.842 TLSTESTn1 : 10.05 2178.09 8.51 0.00 0.00 58610.51 9757.58 92818.39 00:17:52.842 =================================================================================================================== 00:17:52.842 Total : 2178.09 8.51 0.00 0.00 58610.51 9757.58 92818.39 00:17:52.842 0 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3326641 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3326641 ']' 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3326641 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3326641 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3326641' 00:17:52.842 killing process with pid 3326641 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3326641 00:17:52.842 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.842 00:17:52.842 Latency(us) 00:17:52.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:52.842 =================================================================================================================== 00:17:52.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:52.842 [2024-07-15 19:12:31.100987] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3326641 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3yGSkyc3xd 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3yGSkyc3xd 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3yGSkyc3xd 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3yGSkyc3xd' 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3327960 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3327960 /var/tmp/bdevperf.sock 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3327960 ']' 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.842 [2024-07-15 19:12:31.400722] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:52.842 [2024-07-15 19:12:31.400799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327960 ] 00:17:52.842 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.842 [2024-07-15 19:12:31.457201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.842 [2024-07-15 19:12:31.559675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.842 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.843 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3yGSkyc3xd 00:17:52.843 [2024-07-15 19:12:31.948041] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.843 [2024-07-15 19:12:31.948160] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:52.843 [2024-07-15 19:12:31.953749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:52.843 [2024-07-15 19:12:31.954141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d77f90 (107): Transport endpoint is not connected 00:17:52.843 [2024-07-15 19:12:31.955129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d77f90 (9): Bad file descriptor 00:17:52.843 [2024-07-15 19:12:31.956126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:52.843 [2024-07-15 19:12:31.956148] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:52.843 [2024-07-15 19:12:31.956180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:52.843 request: 00:17:52.843 { 00:17:52.843 "name": "TLSTEST", 00:17:52.843 "trtype": "tcp", 00:17:52.843 "traddr": "10.0.0.2", 00:17:52.843 "adrfam": "ipv4", 00:17:52.843 "trsvcid": "4420", 00:17:52.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.843 "prchk_reftag": false, 00:17:52.843 "prchk_guard": false, 00:17:52.843 "hdgst": false, 00:17:52.843 "ddgst": false, 00:17:52.843 "psk": "/tmp/tmp.3yGSkyc3xd", 00:17:52.843 "method": "bdev_nvme_attach_controller", 00:17:52.843 "req_id": 1 00:17:52.843 } 00:17:52.843 Got JSON-RPC error response 00:17:52.843 response: 00:17:52.843 { 00:17:52.843 "code": -5, 00:17:52.843 "message": "Input/output error" 00:17:52.843 } 00:17:52.843 19:12:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3327960 00:17:52.843 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3327960 ']' 00:17:52.843 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3327960 00:17:52.843 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.843 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.843 19:12:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3327960 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3327960' 00:17:52.843 killing process with pid 3327960 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3327960 00:17:52.843 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.843 00:17:52.843 Latency(us) 00:17:52.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.843 =================================================================================================================== 00:17:52.843 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.843 [2024-07-15 19:12:32.008967] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3327960 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uD5sC5vF5H 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uD5sC5vF5H 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uD5sC5vF5H 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uD5sC5vF5H' 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3328010 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3328010 /var/tmp/bdevperf.sock 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3328010 ']' 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.843 [2024-07-15 19:12:32.312368] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:52.843 [2024-07-15 19:12:32.312478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328010 ] 00:17:52.843 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.843 [2024-07-15 19:12:32.374065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.843 [2024-07-15 19:12:32.480776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.uD5sC5vF5H 00:17:52.843 [2024-07-15 19:12:32.800046] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.843 [2024-07-15 19:12:32.800158] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:52.843 [2024-07-15 19:12:32.805304] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:52.843 [2024-07-15 19:12:32.805359] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:52.843 [2024-07-15 19:12:32.805415] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:52.843 [2024-07-15 19:12:32.805969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2ff90 (107): Transport endpoint is not connected 00:17:52.843 [2024-07-15 19:12:32.806957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2ff90 (9): Bad file descriptor 00:17:52.843 [2024-07-15 19:12:32.807955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:52.843 [2024-07-15 19:12:32.807976] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:52.843 [2024-07-15 19:12:32.808008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:52.843 request: 00:17:52.843 { 00:17:52.843 "name": "TLSTEST", 00:17:52.843 "trtype": "tcp", 00:17:52.843 "traddr": "10.0.0.2", 00:17:52.843 "adrfam": "ipv4", 00:17:52.843 "trsvcid": "4420", 00:17:52.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.843 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:52.843 "prchk_reftag": false, 00:17:52.843 "prchk_guard": false, 00:17:52.843 "hdgst": false, 00:17:52.843 "ddgst": false, 00:17:52.843 "psk": "/tmp/tmp.uD5sC5vF5H", 00:17:52.843 "method": "bdev_nvme_attach_controller", 00:17:52.843 "req_id": 1 00:17:52.843 } 00:17:52.843 Got JSON-RPC error response 00:17:52.843 response: 00:17:52.843 { 00:17:52.843 "code": -5, 00:17:52.843 "message": "Input/output error" 00:17:52.843 } 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3328010 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3328010 ']' 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3328010 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3328010 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3328010' 00:17:52.843 killing process with pid 3328010 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3328010 00:17:52.843 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.843 00:17:52.843 Latency(us) 00:17:52.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.843 =================================================================================================================== 00:17:52.843 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.843 [2024-07-15 19:12:32.853210] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:52.843 19:12:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3328010 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uD5sC5vF5H 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uD5sC5vF5H 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uD5sC5vF5H 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uD5sC5vF5H' 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3328116 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3328116 /var/tmp/bdevperf.sock 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3328116 ']' 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.843 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.843 [2024-07-15 19:12:33.147514] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:52.843 [2024-07-15 19:12:33.147589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328116 ] 00:17:52.843 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.843 [2024-07-15 19:12:33.205505] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.101 [2024-07-15 19:12:33.311066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.101 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.101 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.101 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uD5sC5vF5H 00:17:53.360 [2024-07-15 19:12:33.661820] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.360 [2024-07-15 19:12:33.661956] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:53.360 [2024-07-15 19:12:33.667202] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:53.360 [2024-07-15 19:12:33.667234] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:53.360 [2024-07-15 19:12:33.667274] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:53.360 [2024-07-15 19:12:33.667755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eedf90 (107): Transport endpoint is not connected 00:17:53.360 [2024-07-15 19:12:33.668744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eedf90 (9): Bad file descriptor 00:17:53.360 [2024-07-15 19:12:33.669743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:53.360 [2024-07-15 19:12:33.669763] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:53.360 [2024-07-15 19:12:33.669794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:53.360 request: 00:17:53.360 { 00:17:53.360 "name": "TLSTEST", 00:17:53.360 "trtype": "tcp", 00:17:53.360 "traddr": "10.0.0.2", 00:17:53.360 "adrfam": "ipv4", 00:17:53.360 "trsvcid": "4420", 00:17:53.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:53.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.360 "prchk_reftag": false, 00:17:53.360 "prchk_guard": false, 00:17:53.360 "hdgst": false, 00:17:53.360 "ddgst": false, 00:17:53.360 "psk": "/tmp/tmp.uD5sC5vF5H", 00:17:53.360 "method": "bdev_nvme_attach_controller", 00:17:53.360 "req_id": 1 00:17:53.360 } 00:17:53.360 Got JSON-RPC error response 00:17:53.360 response: 00:17:53.360 { 00:17:53.360 "code": -5, 00:17:53.360 "message": "Input/output error" 00:17:53.360 } 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3328116 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3328116 ']' 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3328116 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3328116 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3328116' 00:17:53.360 killing process with pid 3328116 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3328116 00:17:53.360 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.360 00:17:53.360 Latency(us) 00:17:53.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.360 =================================================================================================================== 00:17:53.360 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.360 [2024-07-15 19:12:33.721379] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.360 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3328116 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3328257 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3328257 /var/tmp/bdevperf.sock 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3328257 ']' 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.619 19:12:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.619 [2024-07-15 19:12:34.032315] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:53.619 [2024-07-15 19:12:34.032401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328257 ] 00:17:53.878 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.878 [2024-07-15 19:12:34.091566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.878 [2024-07-15 19:12:34.199524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.168 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.168 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:54.168 19:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:54.428 [2024-07-15 19:12:34.588769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:54.428 [2024-07-15 19:12:34.590190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffe770 (9): Bad file descriptor 00:17:54.428 [2024-07-15 19:12:34.591179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:54.428 [2024-07-15 19:12:34.591216] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:54.428 [2024-07-15 19:12:34.591248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:54.428 request: 00:17:54.428 { 00:17:54.428 "name": "TLSTEST", 00:17:54.428 "trtype": "tcp", 00:17:54.428 "traddr": "10.0.0.2", 00:17:54.428 "adrfam": "ipv4", 00:17:54.428 "trsvcid": "4420", 00:17:54.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.428 "prchk_reftag": false, 00:17:54.428 "prchk_guard": false, 00:17:54.428 "hdgst": false, 00:17:54.428 "ddgst": false, 00:17:54.428 "method": "bdev_nvme_attach_controller", 00:17:54.428 "req_id": 1 00:17:54.428 } 00:17:54.428 Got JSON-RPC error response 00:17:54.428 response: 00:17:54.428 { 00:17:54.428 "code": -5, 00:17:54.428 "message": "Input/output error" 00:17:54.428 } 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3328257 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3328257 ']' 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3328257 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3328257 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3328257' 00:17:54.428 killing process with pid 3328257 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3328257 00:17:54.428 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.428 00:17:54.428 Latency(us) 00:17:54.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.428 =================================================================================================================== 00:17:54.428 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.428 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3328257 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3324247 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3324247 ']' 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3324247 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3324247 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3324247' 00:17:54.687 
killing process with pid 3324247 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3324247 00:17:54.687 [2024-07-15 19:12:34.919693] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:54.687 19:12:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3324247 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.8O875e69va 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.8O875e69va 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3328405 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3328405 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3328405 ']' 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.946 19:12:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.946 [2024-07-15 19:12:35.332114] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:54.946 [2024-07-15 19:12:35.332204] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.946 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.204 [2024-07-15 19:12:35.400822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.204 [2024-07-15 19:12:35.512897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.204 [2024-07-15 19:12:35.512963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.204 [2024-07-15 19:12:35.512980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.204 [2024-07-15 19:12:35.512993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.204 [2024-07-15 19:12:35.513004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.204 [2024-07-15 19:12:35.513034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.8O875e69va 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8O875e69va 00:17:56.139 19:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:56.140 [2024-07-15 19:12:36.505087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.140 19:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:56.399 19:12:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:56.658 [2024-07-15 19:12:37.010471] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.658 [2024-07-15 19:12:37.010691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.658 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:56.916 malloc0 00:17:56.916 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.175 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.8O875e69va 00:17:57.433 [2024-07-15 19:12:37.824942] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8O875e69va 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8O875e69va' 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3328695 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3328695 /var/tmp/bdevperf.sock 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3328695 ']' 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.433 19:12:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.691 [2024-07-15 19:12:37.891162] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:17:57.691 [2024-07-15 19:12:37.891251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3328695 ] 00:17:57.691 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.691 [2024-07-15 19:12:37.952276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.691 [2024-07-15 19:12:38.062801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.948 19:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.948 19:12:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:57.948 19:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8O875e69va 00:17:58.206 [2024-07-15 19:12:38.397089] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.206 [2024-07-15 19:12:38.397225] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.206 TLSTESTn1 00:17:58.206 19:12:38 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.206 Running I/O for 10 seconds... 00:18:10.419 00:18:10.419 Latency(us) 00:18:10.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.419 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.419 Verification LBA range: start 0x0 length 0x2000 00:18:10.419 TLSTESTn1 : 10.06 1908.83 7.46 0.00 0.00 66861.39 7039.05 95925.29 00:18:10.419 =================================================================================================================== 00:18:10.419 Total : 1908.83 7.46 0.00 0.00 66861.39 7039.05 95925.29 00:18:10.419 0 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3328695 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3328695 ']' 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3328695 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3328695 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3328695' 00:18:10.419 killing process with pid 3328695 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3328695 00:18:10.419 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.419 00:18:10.419 Latency(us) 00:18:10.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:10.419 =================================================================================================================== 00:18:10.419 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.419 [2024-07-15 19:12:48.740047] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:10.419 19:12:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3328695 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.8O875e69va 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8O875e69va 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8O875e69va 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8O875e69va 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8O875e69va' 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3330011 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3330011 /var/tmp/bdevperf.sock 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3330011 ']' 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.419 [2024-07-15 19:12:49.057086] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:18:10.419 [2024-07-15 19:12:49.057163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330011 ] 00:18:10.419 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.419 [2024-07-15 19:12:49.114646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.419 [2024-07-15 19:12:49.217431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.419 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8O875e69va 00:18:10.419 [2024-07-15 19:12:49.554848] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.419 [2024-07-15 19:12:49.554991] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:10.419 [2024-07-15 19:12:49.555007] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.8O875e69va 00:18:10.419 request: 00:18:10.419 { 00:18:10.419 "name": "TLSTEST", 00:18:10.420 "trtype": "tcp", 00:18:10.420 "traddr": "10.0.0.2", 00:18:10.420 "adrfam": "ipv4", 00:18:10.420 "trsvcid": "4420", 00:18:10.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.420 "prchk_reftag": false, 00:18:10.420 "prchk_guard": false, 00:18:10.420 "hdgst": false, 00:18:10.420 "ddgst": false, 00:18:10.420 "psk": "/tmp/tmp.8O875e69va", 00:18:10.420 "method": "bdev_nvme_attach_controller", 00:18:10.420 "req_id": 1 00:18:10.420 } 00:18:10.420 Got JSON-RPC error response 00:18:10.420 response: 00:18:10.420 { 00:18:10.420 "code": -1, 00:18:10.420 "message": "Operation not permitted" 00:18:10.420 } 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3330011 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3330011 ']' 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3330011 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330011 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330011' 00:18:10.420 killing process with pid 3330011 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3330011 00:18:10.420 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.420 00:18:10.420 Latency(us) 00:18:10.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.420 
=================================================================================================================== 00:18:10.420 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3330011 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3328405 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3328405 ']' 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3328405 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3328405 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3328405' 00:18:10.420 killing process with pid 3328405 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3328405 00:18:10.420 [2024-07-15 19:12:49.860602] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:10.420 19:12:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3328405 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3330163 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3330163 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3330163 ']' 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
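The "Operation not permitted" failure above is the intended outcome of the chmod 0666 step at target/tls.sh@170: the attach path checks the key file's mode and refuses to load a PSK that is readable by group or other ("Incorrect permissions for PSK file"). A minimal sketch of the corresponding recovery, reusing only the socket, NQNs and key path that appear in this run (the test itself does not execute this here):

# Restore owner-only access; the PSK loader rejects group/world-readable key files.
chmod 0600 /tmp/tmp.8O875e69va
# The same attach that failed above should then get past the permission check.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8O875e69va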
00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.420 19:12:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.420 [2024-07-15 19:12:50.209900] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:10.420 [2024-07-15 19:12:50.209993] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.420 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.420 [2024-07-15 19:12:50.277752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.420 [2024-07-15 19:12:50.389188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.420 [2024-07-15 19:12:50.389253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.420 [2024-07-15 19:12:50.389269] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.420 [2024-07-15 19:12:50.389283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.420 [2024-07-15 19:12:50.389294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.420 [2024-07-15 19:12:50.389325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.8O875e69va 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.8O875e69va 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.8O875e69va 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8O875e69va 00:18:10.986 19:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:11.244 [2024-07-15 19:12:51.440263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.244 19:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:11.500 
19:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:11.756 [2024-07-15 19:12:52.013762] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.756 [2024-07-15 19:12:52.014022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.756 19:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.012 malloc0 00:18:12.012 19:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.269 19:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8O875e69va 00:18:12.527 [2024-07-15 19:12:52.844516] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:12.527 [2024-07-15 19:12:52.844561] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:12.527 [2024-07-15 19:12:52.844601] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:12.527 request: 00:18:12.527 { 00:18:12.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.527 "host": "nqn.2016-06.io.spdk:host1", 00:18:12.527 "psk": "/tmp/tmp.8O875e69va", 00:18:12.527 "method": "nvmf_subsystem_add_host", 00:18:12.527 "req_id": 1 00:18:12.527 } 00:18:12.527 Got JSON-RPC error response 00:18:12.527 response: 00:18:12.527 { 00:18:12.527 "code": -32603, 00:18:12.527 "message": "Internal error" 00:18:12.527 } 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3330163 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3330163 ']' 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3330163 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330163 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330163' 00:18:12.527 killing process with pid 3330163 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3330163 00:18:12.527 19:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3330163 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.8O875e69va 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:12.784 
19:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3330587 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3330587 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3330587 ']' 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.784 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.048 [2024-07-15 19:12:53.217040] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:13.048 [2024-07-15 19:12:53.217125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.048 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.048 [2024-07-15 19:12:53.284078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.048 [2024-07-15 19:12:53.404065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.048 [2024-07-15 19:12:53.404136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.048 [2024-07-15 19:12:53.404153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.048 [2024-07-15 19:12:53.404166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.048 [2024-07-15 19:12:53.404177] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
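The key whose mode was just flipped back to 0600 is the interchange-format string produced at target/tls.sh@159 and written to /tmp/tmp.8O875e69va: a "NVMeTLSkey-1" prefix, the digest identifier ("02", from the 2 passed to format_interchange_psk), and a base64 payload carrying the configured key plus four trailing bytes that the interchange format defines as a CRC-32 of the key. A small check of that layout, using the same inline-python style as nvmf/common.sh (illustrative only; the checksum bytes are shown, not recomputed):

python3 - <<'EOF'
import base64
key = "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:"
payload = base64.b64decode(key.split(":")[2])
# Expect the 48-character key that was passed to format_interchange_psk.
print("configured key:", payload[:-4].decode())
# The last four bytes are the appended checksum field.
print("trailing bytes:", payload[-4:].hex())
EOF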
00:18:13.048 [2024-07-15 19:12:53.404209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.8O875e69va 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8O875e69va 00:18:13.306 19:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:13.565 [2024-07-15 19:12:53.826855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.565 19:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:13.823 19:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:14.080 [2024-07-15 19:12:54.340301] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.080 [2024-07-15 19:12:54.340537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.080 19:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:14.336 malloc0 00:18:14.336 19:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:14.594 19:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8O875e69va 00:18:14.852 [2024-07-15 19:12:55.229702] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3330868 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3330868 /var/tmp/bdevperf.sock 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3330868 ']' 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.852 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.109 [2024-07-15 19:12:55.292819] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:15.109 [2024-07-15 19:12:55.292908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3330868 ] 00:18:15.109 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.109 [2024-07-15 19:12:55.349213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.109 [2024-07-15 19:12:55.454560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.366 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.366 19:12:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.366 19:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8O875e69va 00:18:15.366 [2024-07-15 19:12:55.789493] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.366 [2024-07-15 19:12:55.789616] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.623 TLSTESTn1 00:18:15.623 19:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:15.880 19:12:56 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:15.880 "subsystems": [ 00:18:15.880 { 00:18:15.880 "subsystem": "keyring", 00:18:15.880 "config": [] 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "subsystem": "iobuf", 00:18:15.880 "config": [ 00:18:15.880 { 00:18:15.880 "method": "iobuf_set_options", 00:18:15.880 "params": { 00:18:15.880 "small_pool_count": 8192, 00:18:15.880 "large_pool_count": 1024, 00:18:15.880 "small_bufsize": 8192, 00:18:15.880 "large_bufsize": 135168 00:18:15.880 } 00:18:15.880 } 00:18:15.880 ] 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "subsystem": "sock", 00:18:15.880 "config": [ 00:18:15.880 { 00:18:15.880 "method": "sock_set_default_impl", 00:18:15.880 "params": { 00:18:15.880 "impl_name": "posix" 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "sock_impl_set_options", 00:18:15.880 "params": { 00:18:15.880 "impl_name": "ssl", 00:18:15.880 "recv_buf_size": 4096, 00:18:15.880 "send_buf_size": 4096, 00:18:15.880 "enable_recv_pipe": true, 00:18:15.880 "enable_quickack": false, 00:18:15.880 "enable_placement_id": 0, 00:18:15.880 "enable_zerocopy_send_server": true, 00:18:15.880 "enable_zerocopy_send_client": false, 00:18:15.880 "zerocopy_threshold": 0, 00:18:15.880 "tls_version": 0, 00:18:15.880 "enable_ktls": false 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "sock_impl_set_options", 00:18:15.880 "params": { 00:18:15.880 "impl_name": "posix", 00:18:15.880 "recv_buf_size": 2097152, 00:18:15.880 
"send_buf_size": 2097152, 00:18:15.880 "enable_recv_pipe": true, 00:18:15.880 "enable_quickack": false, 00:18:15.880 "enable_placement_id": 0, 00:18:15.880 "enable_zerocopy_send_server": true, 00:18:15.880 "enable_zerocopy_send_client": false, 00:18:15.880 "zerocopy_threshold": 0, 00:18:15.880 "tls_version": 0, 00:18:15.880 "enable_ktls": false 00:18:15.880 } 00:18:15.880 } 00:18:15.880 ] 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "subsystem": "vmd", 00:18:15.880 "config": [] 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "subsystem": "accel", 00:18:15.880 "config": [ 00:18:15.880 { 00:18:15.880 "method": "accel_set_options", 00:18:15.880 "params": { 00:18:15.880 "small_cache_size": 128, 00:18:15.880 "large_cache_size": 16, 00:18:15.880 "task_count": 2048, 00:18:15.880 "sequence_count": 2048, 00:18:15.880 "buf_count": 2048 00:18:15.880 } 00:18:15.880 } 00:18:15.880 ] 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "subsystem": "bdev", 00:18:15.880 "config": [ 00:18:15.880 { 00:18:15.880 "method": "bdev_set_options", 00:18:15.880 "params": { 00:18:15.880 "bdev_io_pool_size": 65535, 00:18:15.880 "bdev_io_cache_size": 256, 00:18:15.880 "bdev_auto_examine": true, 00:18:15.880 "iobuf_small_cache_size": 128, 00:18:15.880 "iobuf_large_cache_size": 16 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "bdev_raid_set_options", 00:18:15.880 "params": { 00:18:15.880 "process_window_size_kb": 1024 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "bdev_iscsi_set_options", 00:18:15.880 "params": { 00:18:15.880 "timeout_sec": 30 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "bdev_nvme_set_options", 00:18:15.880 "params": { 00:18:15.880 "action_on_timeout": "none", 00:18:15.880 "timeout_us": 0, 00:18:15.880 "timeout_admin_us": 0, 00:18:15.880 "keep_alive_timeout_ms": 10000, 00:18:15.880 "arbitration_burst": 0, 00:18:15.880 "low_priority_weight": 0, 00:18:15.880 "medium_priority_weight": 0, 00:18:15.880 "high_priority_weight": 0, 00:18:15.880 "nvme_adminq_poll_period_us": 10000, 00:18:15.880 "nvme_ioq_poll_period_us": 0, 00:18:15.880 "io_queue_requests": 0, 00:18:15.880 "delay_cmd_submit": true, 00:18:15.880 "transport_retry_count": 4, 00:18:15.880 "bdev_retry_count": 3, 00:18:15.880 "transport_ack_timeout": 0, 00:18:15.880 "ctrlr_loss_timeout_sec": 0, 00:18:15.880 "reconnect_delay_sec": 0, 00:18:15.880 "fast_io_fail_timeout_sec": 0, 00:18:15.880 "disable_auto_failback": false, 00:18:15.880 "generate_uuids": false, 00:18:15.880 "transport_tos": 0, 00:18:15.880 "nvme_error_stat": false, 00:18:15.880 "rdma_srq_size": 0, 00:18:15.880 "io_path_stat": false, 00:18:15.880 "allow_accel_sequence": false, 00:18:15.880 "rdma_max_cq_size": 0, 00:18:15.880 "rdma_cm_event_timeout_ms": 0, 00:18:15.880 "dhchap_digests": [ 00:18:15.880 "sha256", 00:18:15.880 "sha384", 00:18:15.880 "sha512" 00:18:15.880 ], 00:18:15.880 "dhchap_dhgroups": [ 00:18:15.880 "null", 00:18:15.880 "ffdhe2048", 00:18:15.880 "ffdhe3072", 00:18:15.880 "ffdhe4096", 00:18:15.880 "ffdhe6144", 00:18:15.880 "ffdhe8192" 00:18:15.880 ] 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "bdev_nvme_set_hotplug", 00:18:15.880 "params": { 00:18:15.880 "period_us": 100000, 00:18:15.880 "enable": false 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "bdev_malloc_create", 00:18:15.880 "params": { 00:18:15.880 "name": "malloc0", 00:18:15.880 "num_blocks": 8192, 00:18:15.880 "block_size": 4096, 00:18:15.880 "physical_block_size": 4096, 00:18:15.880 "uuid": 
"f8d408af-d388-4f0c-90d5-22d100992d06", 00:18:15.880 "optimal_io_boundary": 0 00:18:15.880 } 00:18:15.880 }, 00:18:15.880 { 00:18:15.880 "method": "bdev_wait_for_examine" 00:18:15.880 } 00:18:15.880 ] 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "subsystem": "nbd", 00:18:15.881 "config": [] 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "subsystem": "scheduler", 00:18:15.881 "config": [ 00:18:15.881 { 00:18:15.881 "method": "framework_set_scheduler", 00:18:15.881 "params": { 00:18:15.881 "name": "static" 00:18:15.881 } 00:18:15.881 } 00:18:15.881 ] 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "subsystem": "nvmf", 00:18:15.881 "config": [ 00:18:15.881 { 00:18:15.881 "method": "nvmf_set_config", 00:18:15.881 "params": { 00:18:15.881 "discovery_filter": "match_any", 00:18:15.881 "admin_cmd_passthru": { 00:18:15.881 "identify_ctrlr": false 00:18:15.881 } 00:18:15.881 } 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "method": "nvmf_set_max_subsystems", 00:18:15.881 "params": { 00:18:15.881 "max_subsystems": 1024 00:18:15.881 } 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "method": "nvmf_set_crdt", 00:18:15.881 "params": { 00:18:15.881 "crdt1": 0, 00:18:15.881 "crdt2": 0, 00:18:15.881 "crdt3": 0 00:18:15.881 } 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "method": "nvmf_create_transport", 00:18:15.881 "params": { 00:18:15.881 "trtype": "TCP", 00:18:15.881 "max_queue_depth": 128, 00:18:15.881 "max_io_qpairs_per_ctrlr": 127, 00:18:15.881 "in_capsule_data_size": 4096, 00:18:15.881 "max_io_size": 131072, 00:18:15.881 "io_unit_size": 131072, 00:18:15.881 "max_aq_depth": 128, 00:18:15.881 "num_shared_buffers": 511, 00:18:15.881 "buf_cache_size": 4294967295, 00:18:15.881 "dif_insert_or_strip": false, 00:18:15.881 "zcopy": false, 00:18:15.881 "c2h_success": false, 00:18:15.881 "sock_priority": 0, 00:18:15.881 "abort_timeout_sec": 1, 00:18:15.881 "ack_timeout": 0, 00:18:15.881 "data_wr_pool_size": 0 00:18:15.881 } 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "method": "nvmf_create_subsystem", 00:18:15.881 "params": { 00:18:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.881 "allow_any_host": false, 00:18:15.881 "serial_number": "SPDK00000000000001", 00:18:15.881 "model_number": "SPDK bdev Controller", 00:18:15.881 "max_namespaces": 10, 00:18:15.881 "min_cntlid": 1, 00:18:15.881 "max_cntlid": 65519, 00:18:15.881 "ana_reporting": false 00:18:15.881 } 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "method": "nvmf_subsystem_add_host", 00:18:15.881 "params": { 00:18:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.881 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.881 "psk": "/tmp/tmp.8O875e69va" 00:18:15.881 } 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "method": "nvmf_subsystem_add_ns", 00:18:15.881 "params": { 00:18:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.881 "namespace": { 00:18:15.881 "nsid": 1, 00:18:15.881 "bdev_name": "malloc0", 00:18:15.881 "nguid": "F8D408AFD3884F0C90D522D100992D06", 00:18:15.881 "uuid": "f8d408af-d388-4f0c-90d5-22d100992d06", 00:18:15.881 "no_auto_visible": false 00:18:15.881 } 00:18:15.881 } 00:18:15.881 }, 00:18:15.881 { 00:18:15.881 "method": "nvmf_subsystem_add_listener", 00:18:15.881 "params": { 00:18:15.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.881 "listen_address": { 00:18:15.881 "trtype": "TCP", 00:18:15.881 "adrfam": "IPv4", 00:18:15.881 "traddr": "10.0.0.2", 00:18:15.881 "trsvcid": "4420" 00:18:15.881 }, 00:18:15.881 "secure_channel": true 00:18:15.881 } 00:18:15.881 } 00:18:15.881 ] 00:18:15.881 } 00:18:15.881 ] 00:18:15.881 }' 00:18:15.881 19:12:56 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:16.138 "subsystems": [ 00:18:16.138 { 00:18:16.138 "subsystem": "keyring", 00:18:16.138 "config": [] 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "subsystem": "iobuf", 00:18:16.138 "config": [ 00:18:16.138 { 00:18:16.138 "method": "iobuf_set_options", 00:18:16.138 "params": { 00:18:16.138 "small_pool_count": 8192, 00:18:16.138 "large_pool_count": 1024, 00:18:16.138 "small_bufsize": 8192, 00:18:16.138 "large_bufsize": 135168 00:18:16.138 } 00:18:16.138 } 00:18:16.138 ] 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "subsystem": "sock", 00:18:16.138 "config": [ 00:18:16.138 { 00:18:16.138 "method": "sock_set_default_impl", 00:18:16.138 "params": { 00:18:16.138 "impl_name": "posix" 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "sock_impl_set_options", 00:18:16.138 "params": { 00:18:16.138 "impl_name": "ssl", 00:18:16.138 "recv_buf_size": 4096, 00:18:16.138 "send_buf_size": 4096, 00:18:16.138 "enable_recv_pipe": true, 00:18:16.138 "enable_quickack": false, 00:18:16.138 "enable_placement_id": 0, 00:18:16.138 "enable_zerocopy_send_server": true, 00:18:16.138 "enable_zerocopy_send_client": false, 00:18:16.138 "zerocopy_threshold": 0, 00:18:16.138 "tls_version": 0, 00:18:16.138 "enable_ktls": false 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "sock_impl_set_options", 00:18:16.138 "params": { 00:18:16.138 "impl_name": "posix", 00:18:16.138 "recv_buf_size": 2097152, 00:18:16.138 "send_buf_size": 2097152, 00:18:16.138 "enable_recv_pipe": true, 00:18:16.138 "enable_quickack": false, 00:18:16.138 "enable_placement_id": 0, 00:18:16.138 "enable_zerocopy_send_server": true, 00:18:16.138 "enable_zerocopy_send_client": false, 00:18:16.138 "zerocopy_threshold": 0, 00:18:16.138 "tls_version": 0, 00:18:16.138 "enable_ktls": false 00:18:16.138 } 00:18:16.138 } 00:18:16.138 ] 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "subsystem": "vmd", 00:18:16.138 "config": [] 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "subsystem": "accel", 00:18:16.138 "config": [ 00:18:16.138 { 00:18:16.138 "method": "accel_set_options", 00:18:16.138 "params": { 00:18:16.138 "small_cache_size": 128, 00:18:16.138 "large_cache_size": 16, 00:18:16.138 "task_count": 2048, 00:18:16.138 "sequence_count": 2048, 00:18:16.138 "buf_count": 2048 00:18:16.138 } 00:18:16.138 } 00:18:16.138 ] 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "subsystem": "bdev", 00:18:16.138 "config": [ 00:18:16.138 { 00:18:16.138 "method": "bdev_set_options", 00:18:16.138 "params": { 00:18:16.138 "bdev_io_pool_size": 65535, 00:18:16.138 "bdev_io_cache_size": 256, 00:18:16.138 "bdev_auto_examine": true, 00:18:16.138 "iobuf_small_cache_size": 128, 00:18:16.138 "iobuf_large_cache_size": 16 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "bdev_raid_set_options", 00:18:16.138 "params": { 00:18:16.138 "process_window_size_kb": 1024 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "bdev_iscsi_set_options", 00:18:16.138 "params": { 00:18:16.138 "timeout_sec": 30 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "bdev_nvme_set_options", 00:18:16.138 "params": { 00:18:16.138 "action_on_timeout": "none", 00:18:16.138 "timeout_us": 0, 00:18:16.138 "timeout_admin_us": 0, 00:18:16.138 "keep_alive_timeout_ms": 10000, 00:18:16.138 "arbitration_burst": 0, 
00:18:16.138 "low_priority_weight": 0, 00:18:16.138 "medium_priority_weight": 0, 00:18:16.138 "high_priority_weight": 0, 00:18:16.138 "nvme_adminq_poll_period_us": 10000, 00:18:16.138 "nvme_ioq_poll_period_us": 0, 00:18:16.138 "io_queue_requests": 512, 00:18:16.138 "delay_cmd_submit": true, 00:18:16.138 "transport_retry_count": 4, 00:18:16.138 "bdev_retry_count": 3, 00:18:16.138 "transport_ack_timeout": 0, 00:18:16.138 "ctrlr_loss_timeout_sec": 0, 00:18:16.138 "reconnect_delay_sec": 0, 00:18:16.138 "fast_io_fail_timeout_sec": 0, 00:18:16.138 "disable_auto_failback": false, 00:18:16.138 "generate_uuids": false, 00:18:16.138 "transport_tos": 0, 00:18:16.138 "nvme_error_stat": false, 00:18:16.138 "rdma_srq_size": 0, 00:18:16.138 "io_path_stat": false, 00:18:16.138 "allow_accel_sequence": false, 00:18:16.138 "rdma_max_cq_size": 0, 00:18:16.138 "rdma_cm_event_timeout_ms": 0, 00:18:16.138 "dhchap_digests": [ 00:18:16.138 "sha256", 00:18:16.138 "sha384", 00:18:16.138 "sha512" 00:18:16.138 ], 00:18:16.138 "dhchap_dhgroups": [ 00:18:16.138 "null", 00:18:16.138 "ffdhe2048", 00:18:16.138 "ffdhe3072", 00:18:16.138 "ffdhe4096", 00:18:16.138 "ffdhe6144", 00:18:16.138 "ffdhe8192" 00:18:16.138 ] 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "bdev_nvme_attach_controller", 00:18:16.138 "params": { 00:18:16.138 "name": "TLSTEST", 00:18:16.138 "trtype": "TCP", 00:18:16.138 "adrfam": "IPv4", 00:18:16.138 "traddr": "10.0.0.2", 00:18:16.138 "trsvcid": "4420", 00:18:16.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.138 "prchk_reftag": false, 00:18:16.138 "prchk_guard": false, 00:18:16.138 "ctrlr_loss_timeout_sec": 0, 00:18:16.138 "reconnect_delay_sec": 0, 00:18:16.138 "fast_io_fail_timeout_sec": 0, 00:18:16.138 "psk": "/tmp/tmp.8O875e69va", 00:18:16.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.138 "hdgst": false, 00:18:16.138 "ddgst": false 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "bdev_nvme_set_hotplug", 00:18:16.138 "params": { 00:18:16.138 "period_us": 100000, 00:18:16.138 "enable": false 00:18:16.138 } 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "method": "bdev_wait_for_examine" 00:18:16.138 } 00:18:16.138 ] 00:18:16.138 }, 00:18:16.138 { 00:18:16.138 "subsystem": "nbd", 00:18:16.138 "config": [] 00:18:16.138 } 00:18:16.138 ] 00:18:16.138 }' 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3330868 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3330868 ']' 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3330868 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330868 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330868' 00:18:16.138 killing process with pid 3330868 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3330868 00:18:16.138 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.138 00:18:16.138 Latency(us) 00:18:16.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:16.138 =================================================================================================================== 00:18:16.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.138 [2024-07-15 19:12:56.533036] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:16.138 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3330868 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3330587 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3330587 ']' 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3330587 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330587 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330587' 00:18:16.395 killing process with pid 3330587 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3330587 00:18:16.395 [2024-07-15 19:12:56.826124] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:16.395 19:12:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3330587 00:18:16.984 19:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:16.984 19:12:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.984 19:12:57 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:16.984 "subsystems": [ 00:18:16.984 { 00:18:16.984 "subsystem": "keyring", 00:18:16.984 "config": [] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "iobuf", 00:18:16.984 "config": [ 00:18:16.984 { 00:18:16.984 "method": "iobuf_set_options", 00:18:16.984 "params": { 00:18:16.984 "small_pool_count": 8192, 00:18:16.984 "large_pool_count": 1024, 00:18:16.984 "small_bufsize": 8192, 00:18:16.984 "large_bufsize": 135168 00:18:16.984 } 00:18:16.984 } 00:18:16.984 ] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "sock", 00:18:16.984 "config": [ 00:18:16.984 { 00:18:16.984 "method": "sock_set_default_impl", 00:18:16.984 "params": { 00:18:16.984 "impl_name": "posix" 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "sock_impl_set_options", 00:18:16.984 "params": { 00:18:16.984 "impl_name": "ssl", 00:18:16.984 "recv_buf_size": 4096, 00:18:16.984 "send_buf_size": 4096, 00:18:16.984 "enable_recv_pipe": true, 00:18:16.984 "enable_quickack": false, 00:18:16.984 "enable_placement_id": 0, 00:18:16.984 "enable_zerocopy_send_server": true, 00:18:16.984 "enable_zerocopy_send_client": false, 00:18:16.984 "zerocopy_threshold": 0, 00:18:16.984 "tls_version": 0, 00:18:16.984 "enable_ktls": false 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "sock_impl_set_options", 00:18:16.984 "params": { 00:18:16.984 "impl_name": "posix", 00:18:16.984 "recv_buf_size": 2097152, 00:18:16.984 "send_buf_size": 2097152, 00:18:16.984 "enable_recv_pipe": true, 
00:18:16.984 "enable_quickack": false, 00:18:16.984 "enable_placement_id": 0, 00:18:16.984 "enable_zerocopy_send_server": true, 00:18:16.984 "enable_zerocopy_send_client": false, 00:18:16.984 "zerocopy_threshold": 0, 00:18:16.984 "tls_version": 0, 00:18:16.984 "enable_ktls": false 00:18:16.984 } 00:18:16.984 } 00:18:16.984 ] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "vmd", 00:18:16.984 "config": [] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "accel", 00:18:16.984 "config": [ 00:18:16.984 { 00:18:16.984 "method": "accel_set_options", 00:18:16.984 "params": { 00:18:16.984 "small_cache_size": 128, 00:18:16.984 "large_cache_size": 16, 00:18:16.984 "task_count": 2048, 00:18:16.984 "sequence_count": 2048, 00:18:16.984 "buf_count": 2048 00:18:16.984 } 00:18:16.984 } 00:18:16.984 ] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "bdev", 00:18:16.984 "config": [ 00:18:16.984 { 00:18:16.984 "method": "bdev_set_options", 00:18:16.984 "params": { 00:18:16.984 "bdev_io_pool_size": 65535, 00:18:16.984 "bdev_io_cache_size": 256, 00:18:16.984 "bdev_auto_examine": true, 00:18:16.984 "iobuf_small_cache_size": 128, 00:18:16.984 "iobuf_large_cache_size": 16 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "bdev_raid_set_options", 00:18:16.984 "params": { 00:18:16.984 "process_window_size_kb": 1024 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "bdev_iscsi_set_options", 00:18:16.984 "params": { 00:18:16.984 "timeout_sec": 30 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "bdev_nvme_set_options", 00:18:16.984 "params": { 00:18:16.984 "action_on_timeout": "none", 00:18:16.984 "timeout_us": 0, 00:18:16.984 "timeout_admin_us": 0, 00:18:16.984 "keep_alive_timeout_ms": 10000, 00:18:16.984 "arbitration_burst": 0, 00:18:16.984 "low_priority_weight": 0, 00:18:16.984 "medium_priority_weight": 0, 00:18:16.984 "high_priority_weight": 0, 00:18:16.984 "nvme_adminq_poll_period_us": 10000, 00:18:16.984 "nvme_ioq_poll_period_us": 0, 00:18:16.984 "io_queue_requests": 0, 00:18:16.984 "delay_cmd_submit": true, 00:18:16.984 "transport_retry_count": 4, 00:18:16.984 "bdev_retry_count": 3, 00:18:16.984 "transport_ack_timeout": 0, 00:18:16.984 "ctrlr_loss_timeout_sec": 0, 00:18:16.984 "reconnect_delay_sec": 0, 00:18:16.984 "fast_io_fail_timeout_sec": 0, 00:18:16.984 "disable_auto_failback": false, 00:18:16.984 "generate_uuids": false, 00:18:16.984 "transport_tos": 0, 00:18:16.984 "nvme_error_stat": false, 00:18:16.984 "rdma_srq_size": 0, 00:18:16.984 "io_path_stat": false, 00:18:16.984 "allow_accel_sequence": false, 00:18:16.984 "rdma_max_cq_size": 0, 00:18:16.984 "rdma_cm_event_timeout_ms": 0, 00:18:16.984 "dhchap_digests": [ 00:18:16.984 "sha256", 00:18:16.984 "sha384", 00:18:16.984 "sha512" 00:18:16.984 ], 00:18:16.984 "dhchap_dhgroups": [ 00:18:16.984 "null", 00:18:16.984 "ffdhe2048", 00:18:16.984 "ffdhe3072", 00:18:16.984 "ffdhe4096", 00:18:16.984 "ffdhe6144", 00:18:16.984 "ffdhe8192" 00:18:16.984 ] 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "bdev_nvme_set_hotplug", 00:18:16.984 "params": { 00:18:16.984 "period_us": 100000, 00:18:16.984 "enable": false 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "bdev_malloc_create", 00:18:16.984 "params": { 00:18:16.984 "name": "malloc0", 00:18:16.984 "num_blocks": 8192, 00:18:16.984 "block_size": 4096, 00:18:16.984 "physical_block_size": 4096, 00:18:16.984 "uuid": "f8d408af-d388-4f0c-90d5-22d100992d06", 00:18:16.984 "optimal_io_boundary": 0 
00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "bdev_wait_for_examine" 00:18:16.984 } 00:18:16.984 ] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "nbd", 00:18:16.984 "config": [] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "scheduler", 00:18:16.984 "config": [ 00:18:16.984 { 00:18:16.984 "method": "framework_set_scheduler", 00:18:16.984 "params": { 00:18:16.984 "name": "static" 00:18:16.984 } 00:18:16.984 } 00:18:16.984 ] 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "subsystem": "nvmf", 00:18:16.984 "config": [ 00:18:16.984 { 00:18:16.984 "method": "nvmf_set_config", 00:18:16.984 "params": { 00:18:16.984 "discovery_filter": "match_any", 00:18:16.984 "admin_cmd_passthru": { 00:18:16.984 "identify_ctrlr": false 00:18:16.984 } 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "nvmf_set_max_subsystems", 00:18:16.984 "params": { 00:18:16.984 "max_subsystems": 1024 00:18:16.984 } 00:18:16.984 }, 00:18:16.984 { 00:18:16.984 "method": "nvmf_set_crdt", 00:18:16.984 "params": { 00:18:16.985 "crdt1": 0, 00:18:16.985 "crdt2": 0, 00:18:16.985 "crdt3": 0 00:18:16.985 } 00:18:16.985 }, 00:18:16.985 { 00:18:16.985 "method": "nvmf_create_transport", 00:18:16.985 "params": { 00:18:16.985 "trtype": "TCP", 00:18:16.985 "max_queue_depth": 128, 00:18:16.985 "max_io_qpairs_per_ctrlr": 127, 00:18:16.985 "in_capsule_data_size": 4096, 00:18:16.985 "max_io_size": 131072, 00:18:16.985 "io_unit_size": 131072, 00:18:16.985 "max_aq_depth": 128, 00:18:16.985 "num_shared_buffers": 511, 00:18:16.985 "buf_cache_size": 4294967295, 00:18:16.985 "dif_insert_or_strip": false, 00:18:16.985 "zcopy": false, 00:18:16.985 "c2h_success": false, 00:18:16.985 "sock_priority": 0, 00:18:16.985 "abort_timeout_sec": 1, 00:18:16.985 "ack_timeout": 0, 00:18:16.985 "data_wr_pool_size": 0 00:18:16.985 } 00:18:16.985 }, 00:18:16.985 { 00:18:16.985 "method": "nvmf_create_subsystem", 00:18:16.985 "params": { 00:18:16.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.985 "allow_any_host": false, 00:18:16.985 "serial_number": "SPDK00000000000001", 00:18:16.985 "model_number": "SPDK bdev Controller", 00:18:16.985 "max_namespaces": 10, 00:18:16.985 "min_cntlid": 1, 00:18:16.985 "max_cntlid": 65519, 00:18:16.985 "ana_reporting": false 00:18:16.985 } 00:18:16.985 }, 00:18:16.985 { 00:18:16.985 "method": "nvmf_subsystem_add_host", 00:18:16.985 "params": { 00:18:16.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.985 "host": "nqn.2016-06.io.spdk:host1", 00:18:16.985 "psk": "/tmp/tmp.8O875e69va" 00:18:16.985 } 00:18:16.985 }, 00:18:16.985 { 00:18:16.985 "method": "nvmf_subsystem_add_ns", 00:18:16.985 "params": { 00:18:16.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.985 "namespace": { 00:18:16.985 "nsid": 1, 00:18:16.985 "bdev_name": "malloc0", 00:18:16.985 "nguid": "F8D408AFD3884F0C90D522D100992D06", 00:18:16.985 "uuid": "f8d408af-d388-4f0c-90d5-22d100992d06", 00:18:16.985 "no_auto_visible": false 00:18:16.985 } 00:18:16.985 } 00:18:16.985 }, 00:18:16.985 { 00:18:16.985 "method": "nvmf_subsystem_add_listener", 00:18:16.985 "params": { 00:18:16.985 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.985 "listen_address": { 00:18:16.985 "trtype": "TCP", 00:18:16.985 "adrfam": "IPv4", 00:18:16.985 "traddr": "10.0.0.2", 00:18:16.985 "trsvcid": "4420" 00:18:16.985 }, 00:18:16.985 "secure_channel": true 00:18:16.985 } 00:18:16.985 } 00:18:16.985 ] 00:18:16.985 } 00:18:16.985 ] 00:18:16.985 }' 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.985 
19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3331024 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3331024 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3331024 ']' 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.985 19:12:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.985 [2024-07-15 19:12:57.170956] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:16.985 [2024-07-15 19:12:57.171040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.985 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.985 [2024-07-15 19:12:57.241735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.985 [2024-07-15 19:12:57.360148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.985 [2024-07-15 19:12:57.360213] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.985 [2024-07-15 19:12:57.360229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.985 [2024-07-15 19:12:57.360243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.985 [2024-07-15 19:12:57.360254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.985 [2024-07-15 19:12:57.360341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.242 [2024-07-15 19:12:57.601530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.242 [2024-07-15 19:12:57.617488] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:17.242 [2024-07-15 19:12:57.633510] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.242 [2024-07-15 19:12:57.641031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3331186 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3331186 /var/tmp/bdevperf.sock 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3331186 ']' 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:17.806 19:12:58 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:17.806 "subsystems": [ 00:18:17.806 { 00:18:17.806 "subsystem": "keyring", 00:18:17.806 "config": [] 00:18:17.806 }, 00:18:17.806 { 00:18:17.806 "subsystem": "iobuf", 00:18:17.806 "config": [ 00:18:17.806 { 00:18:17.806 "method": "iobuf_set_options", 00:18:17.806 "params": { 00:18:17.806 "small_pool_count": 8192, 00:18:17.806 "large_pool_count": 1024, 00:18:17.806 "small_bufsize": 8192, 00:18:17.806 "large_bufsize": 135168 00:18:17.806 } 00:18:17.806 } 00:18:17.806 ] 00:18:17.806 }, 00:18:17.806 { 00:18:17.806 "subsystem": "sock", 00:18:17.806 "config": [ 00:18:17.806 { 00:18:17.806 "method": "sock_set_default_impl", 00:18:17.806 "params": { 00:18:17.806 "impl_name": "posix" 00:18:17.806 } 00:18:17.806 }, 00:18:17.806 { 00:18:17.806 "method": "sock_impl_set_options", 00:18:17.806 "params": { 00:18:17.806 "impl_name": "ssl", 00:18:17.806 "recv_buf_size": 4096, 00:18:17.806 "send_buf_size": 4096, 00:18:17.806 "enable_recv_pipe": true, 00:18:17.806 "enable_quickack": false, 00:18:17.806 "enable_placement_id": 0, 00:18:17.806 "enable_zerocopy_send_server": true, 00:18:17.806 "enable_zerocopy_send_client": false, 00:18:17.806 "zerocopy_threshold": 0, 00:18:17.806 "tls_version": 0, 00:18:17.806 "enable_ktls": false 00:18:17.806 } 00:18:17.806 }, 00:18:17.806 { 00:18:17.806 "method": "sock_impl_set_options", 00:18:17.806 "params": { 00:18:17.806 "impl_name": "posix", 00:18:17.806 "recv_buf_size": 2097152, 00:18:17.806 "send_buf_size": 2097152, 00:18:17.806 "enable_recv_pipe": true, 00:18:17.807 "enable_quickack": false, 00:18:17.807 "enable_placement_id": 0, 00:18:17.807 "enable_zerocopy_send_server": true, 00:18:17.807 "enable_zerocopy_send_client": false, 00:18:17.807 "zerocopy_threshold": 0, 00:18:17.807 "tls_version": 0, 00:18:17.807 "enable_ktls": false 00:18:17.807 } 00:18:17.807 } 00:18:17.807 ] 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "subsystem": "vmd", 00:18:17.807 "config": [] 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "subsystem": "accel", 00:18:17.807 "config": [ 00:18:17.807 { 00:18:17.807 "method": "accel_set_options", 00:18:17.807 "params": { 00:18:17.807 "small_cache_size": 128, 00:18:17.807 "large_cache_size": 16, 00:18:17.807 "task_count": 2048, 00:18:17.807 "sequence_count": 2048, 00:18:17.807 "buf_count": 2048 00:18:17.807 } 00:18:17.807 } 00:18:17.807 ] 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "subsystem": "bdev", 00:18:17.807 "config": [ 00:18:17.807 { 00:18:17.807 "method": "bdev_set_options", 00:18:17.807 "params": { 00:18:17.807 "bdev_io_pool_size": 65535, 00:18:17.807 "bdev_io_cache_size": 256, 00:18:17.807 "bdev_auto_examine": true, 00:18:17.807 "iobuf_small_cache_size": 128, 00:18:17.807 "iobuf_large_cache_size": 16 00:18:17.807 } 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "method": "bdev_raid_set_options", 00:18:17.807 "params": { 00:18:17.807 "process_window_size_kb": 1024 00:18:17.807 } 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "method": "bdev_iscsi_set_options", 00:18:17.807 "params": { 00:18:17.807 "timeout_sec": 30 00:18:17.807 } 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "method": 
"bdev_nvme_set_options", 00:18:17.807 "params": { 00:18:17.807 "action_on_timeout": "none", 00:18:17.807 "timeout_us": 0, 00:18:17.807 "timeout_admin_us": 0, 00:18:17.807 "keep_alive_timeout_ms": 10000, 00:18:17.807 "arbitration_burst": 0, 00:18:17.807 "low_priority_weight": 0, 00:18:17.807 "medium_priority_weight": 0, 00:18:17.807 "high_priority_weight": 0, 00:18:17.807 "nvme_adminq_poll_period_us": 10000, 00:18:17.807 "nvme_ioq_poll_period_us": 0, 00:18:17.807 "io_queue_requests": 512, 00:18:17.807 "delay_cmd_submit": true, 00:18:17.807 "transport_retry_count": 4, 00:18:17.807 "bdev_retry_count": 3, 00:18:17.807 "transport_ack_timeout": 0, 00:18:17.807 "ctrlr_loss_timeout_sec": 0, 00:18:17.807 "reconnect_delay_sec": 0, 00:18:17.807 "fast_io_fail_timeout_sec": 0, 00:18:17.807 "disable_auto_failback": false, 00:18:17.807 "generate_uuids": false, 00:18:17.807 "transport_tos": 0, 00:18:17.807 "nvme_error_stat": false, 00:18:17.807 "rdma_srq_size": 0, 00:18:17.807 "io_path_stat": false, 00:18:17.807 "allow_accel_sequence": false, 00:18:17.807 "rdma_max_cq_size": 0, 00:18:17.807 "rdma_cm_event_timeout_ms": 0, 00:18:17.807 "dhchap_digests": [ 00:18:17.807 "sha256", 00:18:17.807 "sha384", 00:18:17.807 "sha512" 00:18:17.807 ], 00:18:17.807 "dhchap_dhgroups": [ 00:18:17.807 "null", 00:18:17.807 "ffdhe2048", 00:18:17.807 "ffdhe3072", 00:18:17.807 "ffdhe4096", 00:18:17.807 "ffdhe6144", 00:18:17.807 "ffdhe8192" 00:18:17.807 ] 00:18:17.807 } 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "method": "bdev_nvme_attach_controller", 00:18:17.807 "params": { 00:18:17.807 "name": "TLSTEST", 00:18:17.807 "trtype": "TCP", 00:18:17.807 "adrfam": "IPv4", 00:18:17.807 "traddr": "10.0.0.2", 00:18:17.807 "trsvcid": "4420", 00:18:17.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.807 "prchk_reftag": false, 00:18:17.807 "prchk_guard": false, 00:18:17.807 "ctrlr_loss_timeout_sec": 0, 00:18:17.807 "reconnect_delay_sec": 0, 00:18:17.807 "fast_io_fail_timeout_sec": 0, 00:18:17.807 "psk": "/tmp/tmp.8O875e69va", 00:18:17.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.807 "hdgst": false, 00:18:17.807 "ddgst": false 00:18:17.807 } 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "method": "bdev_nvme_set_hotplug", 00:18:17.807 "params": { 00:18:17.807 "period_us": 100000, 00:18:17.807 "enable": false 00:18:17.807 } 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "method": "bdev_wait_for_examine" 00:18:17.807 } 00:18:17.807 ] 00:18:17.807 }, 00:18:17.807 { 00:18:17.807 "subsystem": "nbd", 00:18:17.807 "config": [] 00:18:17.807 } 00:18:17.807 ] 00:18:17.808 }' 00:18:18.065 [2024-07-15 19:12:58.240968] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:18:18.065 [2024-07-15 19:12:58.241061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331186 ] 00:18:18.065 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.065 [2024-07-15 19:12:58.298622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.065 [2024-07-15 19:12:58.406513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.322 [2024-07-15 19:12:58.576532] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.322 [2024-07-15 19:12:58.576676] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.886 19:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.886 19:12:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.886 19:12:59 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:19.143 Running I/O for 10 seconds... 00:18:29.142 00:18:29.142 Latency(us) 00:18:29.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.142 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.142 Verification LBA range: start 0x0 length 0x2000 00:18:29.142 TLSTESTn1 : 10.07 1234.72 4.82 0.00 0.00 103337.61 9466.31 90099.86 00:18:29.142 =================================================================================================================== 00:18:29.142 Total : 1234.72 4.82 0.00 0.00 103337.61 9466.31 90099.86 00:18:29.142 0 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3331186 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3331186 ']' 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3331186 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3331186 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3331186' 00:18:29.142 killing process with pid 3331186 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3331186 00:18:29.142 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.142 00:18:29.142 Latency(us) 00:18:29.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.142 =================================================================================================================== 00:18:29.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.142 [2024-07-15 19:13:09.478171] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:29.142 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3331186 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3331024 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3331024 ']' 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3331024 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3331024 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3331024' 00:18:29.399 killing process with pid 3331024 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3331024 00:18:29.399 [2024-07-15 19:13:09.772983] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:29.399 19:13:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3331024 00:18:29.656 19:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:29.656 19:13:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:29.656 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:29.656 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.656 19:13:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3332627 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3332627 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3332627 ']' 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.657 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.915 [2024-07-15 19:13:10.128605] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
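For reference, the bdevperf-side TLS attach exercised in the runs above condenses to the following RPC sequence; the socket path, addresses, NQNs, and PSK file are the values used in this particular run (repo-relative script paths abbreviated), and passing the PSK file directly via --psk is the deprecated spdk_nvme_ctrlr_opts.psk form that produces the deprecation warnings seen in the log:

    # attach a TLS-protected NVMe/TCP controller from the bdevperf application
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.8O875e69va
    # drive the verify workload against the attached controller
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests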
00:18:29.915 [2024-07-15 19:13:10.128684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.915 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.915 [2024-07-15 19:13:10.191107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.915 [2024-07-15 19:13:10.297053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.915 [2024-07-15 19:13:10.297104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.915 [2024-07-15 19:13:10.297133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.915 [2024-07-15 19:13:10.297144] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.915 [2024-07-15 19:13:10.297154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.915 [2024-07-15 19:13:10.297196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.8O875e69va 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8O875e69va 00:18:30.173 19:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.430 [2024-07-15 19:13:10.705913] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.430 19:13:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.688 19:13:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.946 [2024-07-15 19:13:11.295450] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.946 [2024-07-15 19:13:11.295694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.946 19:13:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.204 malloc0 00:18:31.204 19:13:11 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.462 19:13:11 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.8O875e69va 00:18:31.720 [2024-07-15 19:13:12.056430] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3332910 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3332910 /var/tmp/bdevperf.sock 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3332910 ']' 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.720 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.720 [2024-07-15 19:13:12.120240] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:31.720 [2024-07-15 19:13:12.120312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3332910 ] 00:18:31.720 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.978 [2024-07-15 19:13:12.181690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.978 [2024-07-15 19:13:12.297959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.235 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.235 19:13:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.235 19:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8O875e69va 00:18:32.235 19:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:32.493 [2024-07-15 19:13:12.896832] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.750 nvme0n1 00:18:32.750 19:13:12 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.750 Running I/O for 1 seconds... 
00:18:34.121 00:18:34.121 Latency(us) 00:18:34.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.121 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:34.121 Verification LBA range: start 0x0 length 0x2000 00:18:34.121 nvme0n1 : 1.05 2047.63 8.00 0.00 0.00 61154.79 6505.05 91653.31 00:18:34.121 =================================================================================================================== 00:18:34.121 Total : 2047.63 8.00 0.00 0.00 61154.79 6505.05 91653.31 00:18:34.121 0 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3332910 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3332910 ']' 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3332910 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3332910 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3332910' 00:18:34.121 killing process with pid 3332910 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3332910 00:18:34.121 Received shutdown signal, test time was about 1.000000 seconds 00:18:34.121 00:18:34.121 Latency(us) 00:18:34.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.121 =================================================================================================================== 00:18:34.121 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3332910 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3332627 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3332627 ']' 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3332627 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3332627 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3332627' 00:18:34.121 killing process with pid 3332627 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3332627 00:18:34.121 [2024-07-15 19:13:14.502971] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:34.121 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3332627 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.379 
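For reference, the target and initiator setup used by the later iterations above condenses to the RPC sequence below (repo-relative script paths abbreviated; all arguments are the values visible in this run). The initiator side uses the keyring-based PSK form (keyring_file_add_key plus --psk key0), while the target still registers the PSK path through nvmf_subsystem_add_host, which is the deprecated "PSK path" feature flagged in the log:

    # target side (default RPC socket /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8O875e69va
    # initiator side (bdevperf RPC socket)
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8O875e69va
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1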
19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3333195 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3333195 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3333195 ']' 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.379 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.637 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.637 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.637 19:13:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.637 [2024-07-15 19:13:14.858541] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:34.637 [2024-07-15 19:13:14.858622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.637 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.637 [2024-07-15 19:13:14.920012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.637 [2024-07-15 19:13:15.025986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.637 [2024-07-15 19:13:15.026042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.637 [2024-07-15 19:13:15.026071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.637 [2024-07-15 19:13:15.026084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.637 [2024-07-15 19:13:15.026095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.637 [2024-07-15 19:13:15.026121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.896 [2024-07-15 19:13:15.173699] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.896 malloc0 00:18:34.896 [2024-07-15 19:13:15.205518] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.896 [2024-07-15 19:13:15.205788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3333220 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3333220 /var/tmp/bdevperf.sock 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3333220 ']' 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.896 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.896 [2024-07-15 19:13:15.277477] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:18:34.896 [2024-07-15 19:13:15.277547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333220 ] 00:18:34.896 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.154 [2024-07-15 19:13:15.338794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.154 [2024-07-15 19:13:15.452743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.154 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.154 19:13:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:35.154 19:13:15 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8O875e69va 00:18:35.719 19:13:15 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:35.719 [2024-07-15 19:13:16.138305] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.977 nvme0n1 00:18:35.977 19:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.977 Running I/O for 1 seconds... 00:18:37.351 00:18:37.351 Latency(us) 00:18:37.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.351 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.351 Verification LBA range: start 0x0 length 0x2000 00:18:37.351 nvme0n1 : 1.06 1995.98 7.80 0.00 0.00 62686.71 10777.03 93595.12 00:18:37.351 =================================================================================================================== 00:18:37.351 Total : 1995.98 7.80 0.00 0.00 62686.71 10777.03 93595.12 00:18:37.351 0 00:18:37.351 19:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:37.351 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.351 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.351 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.351 19:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:37.351 "subsystems": [ 00:18:37.351 { 00:18:37.351 "subsystem": "keyring", 00:18:37.351 "config": [ 00:18:37.351 { 00:18:37.351 "method": "keyring_file_add_key", 00:18:37.351 "params": { 00:18:37.351 "name": "key0", 00:18:37.351 "path": "/tmp/tmp.8O875e69va" 00:18:37.351 } 00:18:37.351 } 00:18:37.351 ] 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "subsystem": "iobuf", 00:18:37.351 "config": [ 00:18:37.351 { 00:18:37.351 "method": "iobuf_set_options", 00:18:37.351 "params": { 00:18:37.351 "small_pool_count": 8192, 00:18:37.351 "large_pool_count": 1024, 00:18:37.351 "small_bufsize": 8192, 00:18:37.351 "large_bufsize": 135168 00:18:37.351 } 00:18:37.351 } 00:18:37.351 ] 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "subsystem": "sock", 00:18:37.351 "config": [ 00:18:37.351 { 00:18:37.351 "method": "sock_set_default_impl", 00:18:37.351 "params": { 00:18:37.351 "impl_name": "posix" 00:18:37.351 } 
00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "method": "sock_impl_set_options", 00:18:37.351 "params": { 00:18:37.351 "impl_name": "ssl", 00:18:37.351 "recv_buf_size": 4096, 00:18:37.351 "send_buf_size": 4096, 00:18:37.351 "enable_recv_pipe": true, 00:18:37.351 "enable_quickack": false, 00:18:37.351 "enable_placement_id": 0, 00:18:37.351 "enable_zerocopy_send_server": true, 00:18:37.351 "enable_zerocopy_send_client": false, 00:18:37.351 "zerocopy_threshold": 0, 00:18:37.351 "tls_version": 0, 00:18:37.351 "enable_ktls": false 00:18:37.351 } 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "method": "sock_impl_set_options", 00:18:37.351 "params": { 00:18:37.351 "impl_name": "posix", 00:18:37.351 "recv_buf_size": 2097152, 00:18:37.351 "send_buf_size": 2097152, 00:18:37.351 "enable_recv_pipe": true, 00:18:37.351 "enable_quickack": false, 00:18:37.351 "enable_placement_id": 0, 00:18:37.351 "enable_zerocopy_send_server": true, 00:18:37.351 "enable_zerocopy_send_client": false, 00:18:37.351 "zerocopy_threshold": 0, 00:18:37.351 "tls_version": 0, 00:18:37.351 "enable_ktls": false 00:18:37.351 } 00:18:37.351 } 00:18:37.351 ] 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "subsystem": "vmd", 00:18:37.351 "config": [] 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "subsystem": "accel", 00:18:37.351 "config": [ 00:18:37.351 { 00:18:37.351 "method": "accel_set_options", 00:18:37.351 "params": { 00:18:37.351 "small_cache_size": 128, 00:18:37.351 "large_cache_size": 16, 00:18:37.351 "task_count": 2048, 00:18:37.351 "sequence_count": 2048, 00:18:37.351 "buf_count": 2048 00:18:37.351 } 00:18:37.351 } 00:18:37.351 ] 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "subsystem": "bdev", 00:18:37.351 "config": [ 00:18:37.351 { 00:18:37.351 "method": "bdev_set_options", 00:18:37.351 "params": { 00:18:37.351 "bdev_io_pool_size": 65535, 00:18:37.351 "bdev_io_cache_size": 256, 00:18:37.351 "bdev_auto_examine": true, 00:18:37.351 "iobuf_small_cache_size": 128, 00:18:37.351 "iobuf_large_cache_size": 16 00:18:37.351 } 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "method": "bdev_raid_set_options", 00:18:37.351 "params": { 00:18:37.351 "process_window_size_kb": 1024 00:18:37.351 } 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "method": "bdev_iscsi_set_options", 00:18:37.351 "params": { 00:18:37.351 "timeout_sec": 30 00:18:37.351 } 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "method": "bdev_nvme_set_options", 00:18:37.351 "params": { 00:18:37.351 "action_on_timeout": "none", 00:18:37.351 "timeout_us": 0, 00:18:37.351 "timeout_admin_us": 0, 00:18:37.351 "keep_alive_timeout_ms": 10000, 00:18:37.351 "arbitration_burst": 0, 00:18:37.351 "low_priority_weight": 0, 00:18:37.351 "medium_priority_weight": 0, 00:18:37.351 "high_priority_weight": 0, 00:18:37.351 "nvme_adminq_poll_period_us": 10000, 00:18:37.351 "nvme_ioq_poll_period_us": 0, 00:18:37.351 "io_queue_requests": 0, 00:18:37.351 "delay_cmd_submit": true, 00:18:37.351 "transport_retry_count": 4, 00:18:37.351 "bdev_retry_count": 3, 00:18:37.351 "transport_ack_timeout": 0, 00:18:37.351 "ctrlr_loss_timeout_sec": 0, 00:18:37.351 "reconnect_delay_sec": 0, 00:18:37.351 "fast_io_fail_timeout_sec": 0, 00:18:37.351 "disable_auto_failback": false, 00:18:37.351 "generate_uuids": false, 00:18:37.351 "transport_tos": 0, 00:18:37.351 "nvme_error_stat": false, 00:18:37.351 "rdma_srq_size": 0, 00:18:37.351 "io_path_stat": false, 00:18:37.351 "allow_accel_sequence": false, 00:18:37.351 "rdma_max_cq_size": 0, 00:18:37.351 "rdma_cm_event_timeout_ms": 0, 00:18:37.351 "dhchap_digests": [ 00:18:37.351 "sha256", 
00:18:37.351 "sha384", 00:18:37.351 "sha512" 00:18:37.351 ], 00:18:37.351 "dhchap_dhgroups": [ 00:18:37.351 "null", 00:18:37.351 "ffdhe2048", 00:18:37.351 "ffdhe3072", 00:18:37.351 "ffdhe4096", 00:18:37.351 "ffdhe6144", 00:18:37.351 "ffdhe8192" 00:18:37.351 ] 00:18:37.351 } 00:18:37.351 }, 00:18:37.351 { 00:18:37.351 "method": "bdev_nvme_set_hotplug", 00:18:37.351 "params": { 00:18:37.351 "period_us": 100000, 00:18:37.351 "enable": false 00:18:37.351 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "bdev_malloc_create", 00:18:37.352 "params": { 00:18:37.352 "name": "malloc0", 00:18:37.352 "num_blocks": 8192, 00:18:37.352 "block_size": 4096, 00:18:37.352 "physical_block_size": 4096, 00:18:37.352 "uuid": "cdff4dcb-6a8e-4508-9918-7b94b6cbf9cb", 00:18:37.352 "optimal_io_boundary": 0 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "bdev_wait_for_examine" 00:18:37.352 } 00:18:37.352 ] 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "subsystem": "nbd", 00:18:37.352 "config": [] 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "subsystem": "scheduler", 00:18:37.352 "config": [ 00:18:37.352 { 00:18:37.352 "method": "framework_set_scheduler", 00:18:37.352 "params": { 00:18:37.352 "name": "static" 00:18:37.352 } 00:18:37.352 } 00:18:37.352 ] 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "subsystem": "nvmf", 00:18:37.352 "config": [ 00:18:37.352 { 00:18:37.352 "method": "nvmf_set_config", 00:18:37.352 "params": { 00:18:37.352 "discovery_filter": "match_any", 00:18:37.352 "admin_cmd_passthru": { 00:18:37.352 "identify_ctrlr": false 00:18:37.352 } 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "nvmf_set_max_subsystems", 00:18:37.352 "params": { 00:18:37.352 "max_subsystems": 1024 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "nvmf_set_crdt", 00:18:37.352 "params": { 00:18:37.352 "crdt1": 0, 00:18:37.352 "crdt2": 0, 00:18:37.352 "crdt3": 0 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "nvmf_create_transport", 00:18:37.352 "params": { 00:18:37.352 "trtype": "TCP", 00:18:37.352 "max_queue_depth": 128, 00:18:37.352 "max_io_qpairs_per_ctrlr": 127, 00:18:37.352 "in_capsule_data_size": 4096, 00:18:37.352 "max_io_size": 131072, 00:18:37.352 "io_unit_size": 131072, 00:18:37.352 "max_aq_depth": 128, 00:18:37.352 "num_shared_buffers": 511, 00:18:37.352 "buf_cache_size": 4294967295, 00:18:37.352 "dif_insert_or_strip": false, 00:18:37.352 "zcopy": false, 00:18:37.352 "c2h_success": false, 00:18:37.352 "sock_priority": 0, 00:18:37.352 "abort_timeout_sec": 1, 00:18:37.352 "ack_timeout": 0, 00:18:37.352 "data_wr_pool_size": 0 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "nvmf_create_subsystem", 00:18:37.352 "params": { 00:18:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.352 "allow_any_host": false, 00:18:37.352 "serial_number": "00000000000000000000", 00:18:37.352 "model_number": "SPDK bdev Controller", 00:18:37.352 "max_namespaces": 32, 00:18:37.352 "min_cntlid": 1, 00:18:37.352 "max_cntlid": 65519, 00:18:37.352 "ana_reporting": false 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "nvmf_subsystem_add_host", 00:18:37.352 "params": { 00:18:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.352 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.352 "psk": "key0" 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "nvmf_subsystem_add_ns", 00:18:37.352 "params": { 00:18:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.352 "namespace": { 00:18:37.352 "nsid": 1, 
00:18:37.352 "bdev_name": "malloc0", 00:18:37.352 "nguid": "CDFF4DCB6A8E450899187B94B6CBF9CB", 00:18:37.352 "uuid": "cdff4dcb-6a8e-4508-9918-7b94b6cbf9cb", 00:18:37.352 "no_auto_visible": false 00:18:37.352 } 00:18:37.352 } 00:18:37.352 }, 00:18:37.352 { 00:18:37.352 "method": "nvmf_subsystem_add_listener", 00:18:37.352 "params": { 00:18:37.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.352 "listen_address": { 00:18:37.352 "trtype": "TCP", 00:18:37.352 "adrfam": "IPv4", 00:18:37.352 "traddr": "10.0.0.2", 00:18:37.352 "trsvcid": "4420" 00:18:37.352 }, 00:18:37.352 "secure_channel": true 00:18:37.352 } 00:18:37.352 } 00:18:37.352 ] 00:18:37.352 } 00:18:37.352 ] 00:18:37.352 }' 00:18:37.352 19:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:37.610 19:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:37.610 "subsystems": [ 00:18:37.610 { 00:18:37.610 "subsystem": "keyring", 00:18:37.610 "config": [ 00:18:37.610 { 00:18:37.610 "method": "keyring_file_add_key", 00:18:37.610 "params": { 00:18:37.610 "name": "key0", 00:18:37.610 "path": "/tmp/tmp.8O875e69va" 00:18:37.610 } 00:18:37.610 } 00:18:37.610 ] 00:18:37.610 }, 00:18:37.610 { 00:18:37.610 "subsystem": "iobuf", 00:18:37.610 "config": [ 00:18:37.610 { 00:18:37.610 "method": "iobuf_set_options", 00:18:37.610 "params": { 00:18:37.610 "small_pool_count": 8192, 00:18:37.610 "large_pool_count": 1024, 00:18:37.610 "small_bufsize": 8192, 00:18:37.610 "large_bufsize": 135168 00:18:37.610 } 00:18:37.610 } 00:18:37.610 ] 00:18:37.610 }, 00:18:37.610 { 00:18:37.610 "subsystem": "sock", 00:18:37.610 "config": [ 00:18:37.610 { 00:18:37.610 "method": "sock_set_default_impl", 00:18:37.610 "params": { 00:18:37.610 "impl_name": "posix" 00:18:37.610 } 00:18:37.610 }, 00:18:37.610 { 00:18:37.610 "method": "sock_impl_set_options", 00:18:37.610 "params": { 00:18:37.610 "impl_name": "ssl", 00:18:37.610 "recv_buf_size": 4096, 00:18:37.610 "send_buf_size": 4096, 00:18:37.610 "enable_recv_pipe": true, 00:18:37.610 "enable_quickack": false, 00:18:37.610 "enable_placement_id": 0, 00:18:37.610 "enable_zerocopy_send_server": true, 00:18:37.610 "enable_zerocopy_send_client": false, 00:18:37.610 "zerocopy_threshold": 0, 00:18:37.610 "tls_version": 0, 00:18:37.610 "enable_ktls": false 00:18:37.610 } 00:18:37.610 }, 00:18:37.610 { 00:18:37.610 "method": "sock_impl_set_options", 00:18:37.610 "params": { 00:18:37.610 "impl_name": "posix", 00:18:37.610 "recv_buf_size": 2097152, 00:18:37.610 "send_buf_size": 2097152, 00:18:37.610 "enable_recv_pipe": true, 00:18:37.610 "enable_quickack": false, 00:18:37.610 "enable_placement_id": 0, 00:18:37.610 "enable_zerocopy_send_server": true, 00:18:37.610 "enable_zerocopy_send_client": false, 00:18:37.610 "zerocopy_threshold": 0, 00:18:37.610 "tls_version": 0, 00:18:37.610 "enable_ktls": false 00:18:37.610 } 00:18:37.610 } 00:18:37.610 ] 00:18:37.610 }, 00:18:37.610 { 00:18:37.610 "subsystem": "vmd", 00:18:37.610 "config": [] 00:18:37.610 }, 00:18:37.610 { 00:18:37.610 "subsystem": "accel", 00:18:37.610 "config": [ 00:18:37.610 { 00:18:37.610 "method": "accel_set_options", 00:18:37.610 "params": { 00:18:37.610 "small_cache_size": 128, 00:18:37.610 "large_cache_size": 16, 00:18:37.611 "task_count": 2048, 00:18:37.611 "sequence_count": 2048, 00:18:37.611 "buf_count": 2048 00:18:37.611 } 00:18:37.611 } 00:18:37.611 ] 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "subsystem": "bdev", 00:18:37.611 "config": [ 
00:18:37.611 { 00:18:37.611 "method": "bdev_set_options", 00:18:37.611 "params": { 00:18:37.611 "bdev_io_pool_size": 65535, 00:18:37.611 "bdev_io_cache_size": 256, 00:18:37.611 "bdev_auto_examine": true, 00:18:37.611 "iobuf_small_cache_size": 128, 00:18:37.611 "iobuf_large_cache_size": 16 00:18:37.611 } 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "method": "bdev_raid_set_options", 00:18:37.611 "params": { 00:18:37.611 "process_window_size_kb": 1024 00:18:37.611 } 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "method": "bdev_iscsi_set_options", 00:18:37.611 "params": { 00:18:37.611 "timeout_sec": 30 00:18:37.611 } 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "method": "bdev_nvme_set_options", 00:18:37.611 "params": { 00:18:37.611 "action_on_timeout": "none", 00:18:37.611 "timeout_us": 0, 00:18:37.611 "timeout_admin_us": 0, 00:18:37.611 "keep_alive_timeout_ms": 10000, 00:18:37.611 "arbitration_burst": 0, 00:18:37.611 "low_priority_weight": 0, 00:18:37.611 "medium_priority_weight": 0, 00:18:37.611 "high_priority_weight": 0, 00:18:37.611 "nvme_adminq_poll_period_us": 10000, 00:18:37.611 "nvme_ioq_poll_period_us": 0, 00:18:37.611 "io_queue_requests": 512, 00:18:37.611 "delay_cmd_submit": true, 00:18:37.611 "transport_retry_count": 4, 00:18:37.611 "bdev_retry_count": 3, 00:18:37.611 "transport_ack_timeout": 0, 00:18:37.611 "ctrlr_loss_timeout_sec": 0, 00:18:37.611 "reconnect_delay_sec": 0, 00:18:37.611 "fast_io_fail_timeout_sec": 0, 00:18:37.611 "disable_auto_failback": false, 00:18:37.611 "generate_uuids": false, 00:18:37.611 "transport_tos": 0, 00:18:37.611 "nvme_error_stat": false, 00:18:37.611 "rdma_srq_size": 0, 00:18:37.611 "io_path_stat": false, 00:18:37.611 "allow_accel_sequence": false, 00:18:37.611 "rdma_max_cq_size": 0, 00:18:37.611 "rdma_cm_event_timeout_ms": 0, 00:18:37.611 "dhchap_digests": [ 00:18:37.611 "sha256", 00:18:37.611 "sha384", 00:18:37.611 "sha512" 00:18:37.611 ], 00:18:37.611 "dhchap_dhgroups": [ 00:18:37.611 "null", 00:18:37.611 "ffdhe2048", 00:18:37.611 "ffdhe3072", 00:18:37.611 "ffdhe4096", 00:18:37.611 "ffdhe6144", 00:18:37.611 "ffdhe8192" 00:18:37.611 ] 00:18:37.611 } 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "method": "bdev_nvme_attach_controller", 00:18:37.611 "params": { 00:18:37.611 "name": "nvme0", 00:18:37.611 "trtype": "TCP", 00:18:37.611 "adrfam": "IPv4", 00:18:37.611 "traddr": "10.0.0.2", 00:18:37.611 "trsvcid": "4420", 00:18:37.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.611 "prchk_reftag": false, 00:18:37.611 "prchk_guard": false, 00:18:37.611 "ctrlr_loss_timeout_sec": 0, 00:18:37.611 "reconnect_delay_sec": 0, 00:18:37.611 "fast_io_fail_timeout_sec": 0, 00:18:37.611 "psk": "key0", 00:18:37.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.611 "hdgst": false, 00:18:37.611 "ddgst": false 00:18:37.611 } 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "method": "bdev_nvme_set_hotplug", 00:18:37.611 "params": { 00:18:37.611 "period_us": 100000, 00:18:37.611 "enable": false 00:18:37.611 } 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "method": "bdev_enable_histogram", 00:18:37.611 "params": { 00:18:37.611 "name": "nvme0n1", 00:18:37.611 "enable": true 00:18:37.611 } 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "method": "bdev_wait_for_examine" 00:18:37.611 } 00:18:37.611 ] 00:18:37.611 }, 00:18:37.611 { 00:18:37.611 "subsystem": "nbd", 00:18:37.611 "config": [] 00:18:37.611 } 00:18:37.611 ] 00:18:37.611 }' 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3333220 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3333220 ']' 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3333220 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3333220 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3333220' 00:18:37.611 killing process with pid 3333220 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3333220 00:18:37.611 Received shutdown signal, test time was about 1.000000 seconds 00:18:37.611 00:18:37.611 Latency(us) 00:18:37.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.611 =================================================================================================================== 00:18:37.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:37.611 19:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3333220 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3333195 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3333195 ']' 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3333195 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3333195 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3333195' 00:18:37.869 killing process with pid 3333195 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3333195 00:18:37.869 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3333195 00:18:38.127 19:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:38.127 19:13:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:38.127 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.127 19:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:38.127 "subsystems": [ 00:18:38.127 { 00:18:38.127 "subsystem": "keyring", 00:18:38.127 "config": [ 00:18:38.127 { 00:18:38.127 "method": "keyring_file_add_key", 00:18:38.127 "params": { 00:18:38.127 "name": "key0", 00:18:38.127 "path": "/tmp/tmp.8O875e69va" 00:18:38.127 } 00:18:38.127 } 00:18:38.127 ] 00:18:38.127 }, 00:18:38.127 { 00:18:38.127 "subsystem": "iobuf", 00:18:38.127 "config": [ 00:18:38.127 { 00:18:38.127 "method": "iobuf_set_options", 00:18:38.127 "params": { 00:18:38.127 "small_pool_count": 8192, 00:18:38.127 "large_pool_count": 1024, 00:18:38.127 "small_bufsize": 8192, 00:18:38.127 "large_bufsize": 135168 00:18:38.127 } 00:18:38.127 } 00:18:38.127 ] 00:18:38.127 }, 
00:18:38.127 { 00:18:38.127 "subsystem": "sock", 00:18:38.127 "config": [ 00:18:38.127 { 00:18:38.127 "method": "sock_set_default_impl", 00:18:38.128 "params": { 00:18:38.128 "impl_name": "posix" 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "sock_impl_set_options", 00:18:38.128 "params": { 00:18:38.128 "impl_name": "ssl", 00:18:38.128 "recv_buf_size": 4096, 00:18:38.128 "send_buf_size": 4096, 00:18:38.128 "enable_recv_pipe": true, 00:18:38.128 "enable_quickack": false, 00:18:38.128 "enable_placement_id": 0, 00:18:38.128 "enable_zerocopy_send_server": true, 00:18:38.128 "enable_zerocopy_send_client": false, 00:18:38.128 "zerocopy_threshold": 0, 00:18:38.128 "tls_version": 0, 00:18:38.128 "enable_ktls": false 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "sock_impl_set_options", 00:18:38.128 "params": { 00:18:38.128 "impl_name": "posix", 00:18:38.128 "recv_buf_size": 2097152, 00:18:38.128 "send_buf_size": 2097152, 00:18:38.128 "enable_recv_pipe": true, 00:18:38.128 "enable_quickack": false, 00:18:38.128 "enable_placement_id": 0, 00:18:38.128 "enable_zerocopy_send_server": true, 00:18:38.128 "enable_zerocopy_send_client": false, 00:18:38.128 "zerocopy_threshold": 0, 00:18:38.128 "tls_version": 0, 00:18:38.128 "enable_ktls": false 00:18:38.128 } 00:18:38.128 } 00:18:38.128 ] 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "subsystem": "vmd", 00:18:38.128 "config": [] 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "subsystem": "accel", 00:18:38.128 "config": [ 00:18:38.128 { 00:18:38.128 "method": "accel_set_options", 00:18:38.128 "params": { 00:18:38.128 "small_cache_size": 128, 00:18:38.128 "large_cache_size": 16, 00:18:38.128 "task_count": 2048, 00:18:38.128 "sequence_count": 2048, 00:18:38.128 "buf_count": 2048 00:18:38.128 } 00:18:38.128 } 00:18:38.128 ] 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "subsystem": "bdev", 00:18:38.128 "config": [ 00:18:38.128 { 00:18:38.128 "method": "bdev_set_options", 00:18:38.128 "params": { 00:18:38.128 "bdev_io_pool_size": 65535, 00:18:38.128 "bdev_io_cache_size": 256, 00:18:38.128 "bdev_auto_examine": true, 00:18:38.128 "iobuf_small_cache_size": 128, 00:18:38.128 "iobuf_large_cache_size": 16 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "bdev_raid_set_options", 00:18:38.128 "params": { 00:18:38.128 "process_window_size_kb": 1024 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "bdev_iscsi_set_options", 00:18:38.128 "params": { 00:18:38.128 "timeout_sec": 30 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "bdev_nvme_set_options", 00:18:38.128 "params": { 00:18:38.128 "action_on_timeout": "none", 00:18:38.128 "timeout_us": 0, 00:18:38.128 "timeout_admin_us": 0, 00:18:38.128 "keep_alive_timeout_ms": 10000, 00:18:38.128 "arbitration_burst": 0, 00:18:38.128 "low_priority_weight": 0, 00:18:38.128 "medium_priority_weight": 0, 00:18:38.128 "high_priority_weight": 0, 00:18:38.128 "nvme_adminq_poll_period_us": 10000, 00:18:38.128 "nvme_ioq_poll_period_us": 0, 00:18:38.128 "io_queue_requests": 0, 00:18:38.128 "delay_cmd_submit": true, 00:18:38.128 "transport_retry_count": 4, 00:18:38.128 "bdev_retry_count": 3, 00:18:38.128 "transport_ack_timeout": 0, 00:18:38.128 "ctrlr_loss_timeout_sec": 0, 00:18:38.128 "reconnect_delay_sec": 0, 00:18:38.128 "fast_io_fail_timeout_sec": 0, 00:18:38.128 "disable_auto_failback": false, 00:18:38.128 "generate_uuids": false, 00:18:38.128 "transport_tos": 0, 00:18:38.128 "nvme_error_stat": false, 00:18:38.128 "rdma_srq_size": 0, 
00:18:38.128 "io_path_stat": false, 00:18:38.128 "allow_accel_sequence": false, 00:18:38.128 "rdma_max_cq_size": 0, 00:18:38.128 "rdma_cm_event_timeout_ms": 0, 00:18:38.128 "dhchap_digests": [ 00:18:38.128 "sha256", 00:18:38.128 "sha384", 00:18:38.128 "sha512" 00:18:38.128 ], 00:18:38.128 "dhchap_dhgroups": [ 00:18:38.128 "null", 00:18:38.128 "ffdhe2048", 00:18:38.128 "ffdhe3072", 00:18:38.128 "ffdhe4096", 00:18:38.128 "ffdhe6144", 00:18:38.128 "ffdhe8192" 00:18:38.128 ] 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "bdev_nvme_set_hotplug", 00:18:38.128 "params": { 00:18:38.128 "period_us": 100000, 00:18:38.128 "enable": false 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "bdev_malloc_create", 00:18:38.128 "params": { 00:18:38.128 "name": "malloc0", 00:18:38.128 "num_blocks": 8192, 00:18:38.128 "block_size": 4096, 00:18:38.128 "physical_block_size": 4096, 00:18:38.128 "uuid": "cdff4dcb-6a8e-4508-9918-7b94b6cbf9cb", 00:18:38.128 "optimal_io_boundary": 0 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "bdev_wait_for_examine" 00:18:38.128 } 00:18:38.128 ] 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "subsystem": "nbd", 00:18:38.128 "config": [] 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "subsystem": "scheduler", 00:18:38.128 "config": [ 00:18:38.128 { 00:18:38.128 "method": "framework_set_scheduler", 00:18:38.128 "params": { 00:18:38.128 "name": "static" 00:18:38.128 } 00:18:38.128 } 00:18:38.128 ] 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "subsystem": "nvmf", 00:18:38.128 "config": [ 00:18:38.128 { 00:18:38.128 "method": "nvmf_set_config", 00:18:38.128 "params": { 00:18:38.128 "discovery_filter": "match_any", 00:18:38.128 "admin_cmd_passthru": { 00:18:38.128 "identify_ctrlr": false 00:18:38.128 } 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "nvmf_set_max_subsystems", 00:18:38.128 "params": { 00:18:38.128 "max_subsystems": 1024 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "nvmf_set_crdt", 00:18:38.128 "params": { 00:18:38.128 "crdt1": 0, 00:18:38.128 "crdt2": 0, 00:18:38.128 "crdt3": 0 00:18:38.128 } 00:18:38.128 }, 00:18:38.128 { 00:18:38.128 "method": "nvmf_create_transport", 00:18:38.128 "params": { 00:18:38.128 "trtype": "TCP", 00:18:38.128 "max_queue_depth": 128, 00:18:38.128 "max_io_qpairs_per_ctrlr": 127, 00:18:38.128 "in_capsule_data_size": 4096, 00:18:38.128 "max_io_size": 131072, 00:18:38.128 "io_unit_size": 131072, 00:18:38.128 "max_aq_depth": 128, 00:18:38.128 "num_shared_buffers": 511, 00:18:38.128 "buf_cache_size": 4294967295, 00:18:38.128 "dif_insert_or_strip": false, 00:18:38.128 "zcopy": false, 00:18:38.128 "c2h_success": false, 00:18:38.128 "sock_priority": 0, 00:18:38.129 "abort_timeout_sec": 1, 00:18:38.129 "ack_timeout": 0, 00:18:38.129 "data_wr_pool_size": 0 00:18:38.129 } 00:18:38.129 }, 00:18:38.129 { 00:18:38.129 "method": "nvmf_create_subsystem", 00:18:38.129 "params": { 00:18:38.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.129 "allow_any_host": false, 00:18:38.129 "serial_number": "00000000000000000000", 00:18:38.129 "model_number": "SPDK bdev Controller", 00:18:38.129 "max_namespaces": 32, 00:18:38.129 "min_cntlid": 1, 00:18:38.129 "max_cntlid": 65519, 00:18:38.129 "ana_reporting": false 00:18:38.129 } 00:18:38.129 }, 00:18:38.129 { 00:18:38.129 "method": "nvmf_subsystem_add_host", 00:18:38.129 "params": { 00:18:38.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.129 "host": "nqn.2016-06.io.spdk:host1", 00:18:38.129 "psk": "key0" 00:18:38.129 } 
00:18:38.129 }, 00:18:38.129 { 00:18:38.129 "method": "nvmf_subsystem_add_ns", 00:18:38.129 "params": { 00:18:38.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.129 "namespace": { 00:18:38.129 "nsid": 1, 00:18:38.129 "bdev_name": "malloc0", 00:18:38.129 "nguid": "CDFF4DCB6A8E450899187B94B6CBF9CB", 00:18:38.129 "uuid": "cdff4dcb-6a8e-4508-9918-7b94b6cbf9cb", 00:18:38.129 "no_auto_visible": false 00:18:38.129 } 00:18:38.129 } 00:18:38.129 }, 00:18:38.129 { 00:18:38.129 "method": "nvmf_subsystem_add_listener", 00:18:38.129 "params": { 00:18:38.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.129 "listen_address": { 00:18:38.129 "trtype": "TCP", 00:18:38.129 "adrfam": "IPv4", 00:18:38.129 "traddr": "10.0.0.2", 00:18:38.129 "trsvcid": "4420" 00:18:38.129 }, 00:18:38.129 "secure_channel": true 00:18:38.129 } 00:18:38.129 } 00:18:38.129 ] 00:18:38.129 } 00:18:38.129 ] 00:18:38.129 }' 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3333635 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3333635 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3333635 ']' 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.129 19:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.129 [2024-07-15 19:13:18.542404] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:38.129 [2024-07-15 19:13:18.542506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.420 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.420 [2024-07-15 19:13:18.606733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.420 [2024-07-15 19:13:18.712867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.420 [2024-07-15 19:13:18.712938] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.420 [2024-07-15 19:13:18.712967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.420 [2024-07-15 19:13:18.712978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.420 [2024-07-15 19:13:18.712989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
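At this point the first target (pid 3333195) has been torn down and a second one (pid 3333635) is started with the previously saved configuration fed straight in through -c /dev/fd/62, so the keyring, subsystem, PSK host entry and secure-channel listener are all recreated without issuing individual RPCs. A rough sketch of that pattern, assuming the /dev/fd/62 descriptor comes from bash process substitution around the echoed JSON:

    # Capture the live configuration of a running target as JSON ...
    tgtcfg=$(scripts/rpc.py save_config)
    # ... then boot a fresh target pre-loaded with it; <(...) expands to /dev/fd/NN.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    waitforlisten $!   # autotest helper: wait until the new target answers on /var/tmp/spdk.sock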
00:18:38.420 [2024-07-15 19:13:18.713069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.679 [2024-07-15 19:13:18.951235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.679 [2024-07-15 19:13:18.983252] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.679 [2024-07-15 19:13:18.991078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3333786 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3333786 /var/tmp/bdevperf.sock 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3333786 ']' 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
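The bdevperf side follows the same pattern: it is started idle with -z on its own RPC socket, its configuration (keyring plus bdev_nvme_attach_controller with psk key0 and bdev_enable_histogram) arrives via -c /dev/fd/63, and the actual I/O is kicked off externally with perform_tests. A condensed sketch of the commands involved, all of which appear in this log; only the process substitution around the echoed JSON is an assumption about how /dev/fd/63 is produced:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start bdevperf idle (-z): it parses the config, attaches nvme0 over TLS, then waits.
    $spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    # Trigger the 1-second verify run whose per-core results are printed below.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests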
00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.246 19:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:39.246 "subsystems": [ 00:18:39.246 { 00:18:39.246 "subsystem": "keyring", 00:18:39.246 "config": [ 00:18:39.246 { 00:18:39.246 "method": "keyring_file_add_key", 00:18:39.246 "params": { 00:18:39.246 "name": "key0", 00:18:39.246 "path": "/tmp/tmp.8O875e69va" 00:18:39.246 } 00:18:39.246 } 00:18:39.246 ] 00:18:39.246 }, 00:18:39.246 { 00:18:39.246 "subsystem": "iobuf", 00:18:39.246 "config": [ 00:18:39.246 { 00:18:39.246 "method": "iobuf_set_options", 00:18:39.246 "params": { 00:18:39.246 "small_pool_count": 8192, 00:18:39.246 "large_pool_count": 1024, 00:18:39.246 "small_bufsize": 8192, 00:18:39.246 "large_bufsize": 135168 00:18:39.246 } 00:18:39.246 } 00:18:39.246 ] 00:18:39.246 }, 00:18:39.246 { 00:18:39.246 "subsystem": "sock", 00:18:39.246 "config": [ 00:18:39.246 { 00:18:39.246 "method": "sock_set_default_impl", 00:18:39.246 "params": { 00:18:39.246 "impl_name": "posix" 00:18:39.246 } 00:18:39.246 }, 00:18:39.246 { 00:18:39.246 "method": "sock_impl_set_options", 00:18:39.246 "params": { 00:18:39.246 "impl_name": "ssl", 00:18:39.246 "recv_buf_size": 4096, 00:18:39.246 "send_buf_size": 4096, 00:18:39.246 "enable_recv_pipe": true, 00:18:39.246 "enable_quickack": false, 00:18:39.246 "enable_placement_id": 0, 00:18:39.246 "enable_zerocopy_send_server": true, 00:18:39.246 "enable_zerocopy_send_client": false, 00:18:39.246 "zerocopy_threshold": 0, 00:18:39.246 "tls_version": 0, 00:18:39.246 "enable_ktls": false 00:18:39.246 } 00:18:39.246 }, 00:18:39.246 { 00:18:39.246 "method": "sock_impl_set_options", 00:18:39.246 "params": { 00:18:39.246 "impl_name": "posix", 00:18:39.246 "recv_buf_size": 2097152, 00:18:39.246 "send_buf_size": 2097152, 00:18:39.246 "enable_recv_pipe": true, 00:18:39.246 "enable_quickack": false, 00:18:39.246 "enable_placement_id": 0, 00:18:39.246 "enable_zerocopy_send_server": true, 00:18:39.246 "enable_zerocopy_send_client": false, 00:18:39.246 "zerocopy_threshold": 0, 00:18:39.246 "tls_version": 0, 00:18:39.246 "enable_ktls": false 00:18:39.246 } 00:18:39.246 } 00:18:39.246 ] 00:18:39.246 }, 00:18:39.246 { 00:18:39.246 "subsystem": "vmd", 00:18:39.246 "config": [] 00:18:39.246 }, 00:18:39.246 { 00:18:39.246 "subsystem": "accel", 00:18:39.246 "config": [ 00:18:39.246 { 00:18:39.246 "method": "accel_set_options", 00:18:39.246 "params": { 00:18:39.246 "small_cache_size": 128, 00:18:39.246 "large_cache_size": 16, 00:18:39.246 "task_count": 2048, 00:18:39.246 "sequence_count": 2048, 00:18:39.246 "buf_count": 2048 00:18:39.246 } 00:18:39.246 } 00:18:39.246 ] 00:18:39.246 }, 00:18:39.246 { 00:18:39.246 "subsystem": "bdev", 00:18:39.246 "config": [ 00:18:39.246 { 00:18:39.246 "method": "bdev_set_options", 00:18:39.246 "params": { 00:18:39.247 "bdev_io_pool_size": 65535, 00:18:39.247 "bdev_io_cache_size": 256, 00:18:39.247 "bdev_auto_examine": true, 00:18:39.247 "iobuf_small_cache_size": 128, 00:18:39.247 "iobuf_large_cache_size": 16 00:18:39.247 } 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "method": "bdev_raid_set_options", 00:18:39.247 "params": { 00:18:39.247 "process_window_size_kb": 1024 00:18:39.247 } 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "method": "bdev_iscsi_set_options", 00:18:39.247 "params": { 00:18:39.247 "timeout_sec": 30 00:18:39.247 } 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "method": 
"bdev_nvme_set_options", 00:18:39.247 "params": { 00:18:39.247 "action_on_timeout": "none", 00:18:39.247 "timeout_us": 0, 00:18:39.247 "timeout_admin_us": 0, 00:18:39.247 "keep_alive_timeout_ms": 10000, 00:18:39.247 "arbitration_burst": 0, 00:18:39.247 "low_priority_weight": 0, 00:18:39.247 "medium_priority_weight": 0, 00:18:39.247 "high_priority_weight": 0, 00:18:39.247 "nvme_adminq_poll_period_us": 10000, 00:18:39.247 "nvme_ioq_poll_period_us": 0, 00:18:39.247 "io_queue_requests": 512, 00:18:39.247 "delay_cmd_submit": true, 00:18:39.247 "transport_retry_count": 4, 00:18:39.247 "bdev_retry_count": 3, 00:18:39.247 "transport_ack_timeout": 0, 00:18:39.247 "ctrlr_loss_timeout_sec": 0, 00:18:39.247 "reconnect_delay_sec": 0, 00:18:39.247 "fast_io_fail_timeout_sec": 0, 00:18:39.247 "disable_auto_failback": false, 00:18:39.247 "generate_uuids": false, 00:18:39.247 "transport_tos": 0, 00:18:39.247 "nvme_error_stat": false, 00:18:39.247 "rdma_srq_size": 0, 00:18:39.247 "io_path_stat": false, 00:18:39.247 "allow_accel_sequence": false, 00:18:39.247 "rdma_max_cq_size": 0, 00:18:39.247 "rdma_cm_event_timeout_ms": 0, 00:18:39.247 "dhchap_digests": [ 00:18:39.247 "sha256", 00:18:39.247 "sha384", 00:18:39.247 "sha512" 00:18:39.247 ], 00:18:39.247 "dhchap_dhgroups": [ 00:18:39.247 "null", 00:18:39.247 "ffdhe2048", 00:18:39.247 "ffdhe3072", 00:18:39.247 "ffdhe4096", 00:18:39.247 "ffdhe6144", 00:18:39.247 "ffdhe8192" 00:18:39.247 ] 00:18:39.247 } 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "method": "bdev_nvme_attach_controller", 00:18:39.247 "params": { 00:18:39.247 "name": "nvme0", 00:18:39.247 "trtype": "TCP", 00:18:39.247 "adrfam": "IPv4", 00:18:39.247 "traddr": "10.0.0.2", 00:18:39.247 "trsvcid": "4420", 00:18:39.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.247 "prchk_reftag": false, 00:18:39.247 "prchk_guard": false, 00:18:39.247 "ctrlr_loss_timeout_sec": 0, 00:18:39.247 "reconnect_delay_sec": 0, 00:18:39.247 "fast_io_fail_timeout_sec": 0, 00:18:39.247 "psk": "key0", 00:18:39.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.247 "hdgst": false, 00:18:39.247 "ddgst": false 00:18:39.247 } 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "method": "bdev_nvme_set_hotplug", 00:18:39.247 "params": { 00:18:39.247 "period_us": 100000, 00:18:39.247 "enable": false 00:18:39.247 } 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "method": "bdev_enable_histogram", 00:18:39.247 "params": { 00:18:39.247 "name": "nvme0n1", 00:18:39.247 "enable": true 00:18:39.247 } 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "method": "bdev_wait_for_examine" 00:18:39.247 } 00:18:39.247 ] 00:18:39.247 }, 00:18:39.247 { 00:18:39.247 "subsystem": "nbd", 00:18:39.247 "config": [] 00:18:39.247 } 00:18:39.247 ] 00:18:39.247 }' 00:18:39.247 [2024-07-15 19:13:19.601613] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:18:39.247 [2024-07-15 19:13:19.601698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333786 ] 00:18:39.247 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.247 [2024-07-15 19:13:19.663673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.506 [2024-07-15 19:13:19.780336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.765 [2024-07-15 19:13:19.962193] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.330 19:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.330 19:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:40.330 19:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:40.330 19:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:40.588 19:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.588 19:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.588 Running I/O for 1 seconds... 00:18:41.957 00:18:41.957 Latency(us) 00:18:41.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.957 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:41.957 Verification LBA range: start 0x0 length 0x2000 00:18:41.957 nvme0n1 : 1.06 2053.72 8.02 0.00 0.00 60875.09 10437.21 96313.65 00:18:41.957 =================================================================================================================== 00:18:41.957 Total : 2053.72 8.02 0.00 0.00 60875.09 10437.21 96313.65 00:18:41.957 0 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:41.957 19:13:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:41.957 nvmf_trace.0 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3333786 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3333786 ']' 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3333786 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3333786 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3333786' 00:18:41.957 killing process with pid 3333786 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3333786 00:18:41.957 Received shutdown signal, test time was about 1.000000 seconds 00:18:41.957 00:18:41.957 Latency(us) 00:18:41.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.957 =================================================================================================================== 00:18:41.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3333786 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:41.957 rmmod nvme_tcp 00:18:41.957 rmmod nvme_fabrics 00:18:41.957 rmmod nvme_keyring 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3333635 ']' 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3333635 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3333635 ']' 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3333635 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.957 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3333635 00:18:42.214 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.214 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.214 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3333635' 00:18:42.214 killing process with pid 3333635 00:18:42.214 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3333635 00:18:42.214 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3333635 00:18:42.521 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.521 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:42.521 19:13:22 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:42.521 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.521 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.521 19:13:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.521 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.521 19:13:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.417 19:13:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.417 19:13:24 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uD5sC5vF5H /tmp/tmp.3yGSkyc3xd /tmp/tmp.8O875e69va 00:18:44.417 00:18:44.417 real 1m22.125s 00:18:44.417 user 2m3.881s 00:18:44.417 sys 0m28.548s 00:18:44.417 19:13:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.417 19:13:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.417 ************************************ 00:18:44.417 END TEST nvmf_tls 00:18:44.417 ************************************ 00:18:44.417 19:13:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.417 19:13:24 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:44.417 19:13:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.417 19:13:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.417 19:13:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.417 ************************************ 00:18:44.417 START TEST nvmf_fips 00:18:44.417 ************************************ 00:18:44.417 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:44.417 * Looking for test storage... 
00:18:44.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:44.417 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.418 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.677 19:13:24 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.677 19:13:24 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:44.678 Error setting digest 00:18:44.678 0082F852B47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:44.678 0082F852B47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.678 19:13:24 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.678 19:13:25 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.678 19:13:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.678 19:13:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:44.678 19:13:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:44.678 19:13:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:44.678 19:13:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.223 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.224 
19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:47.224 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:47.224 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:47.224 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:47.224 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:18:47.224 00:18:47.224 --- 10.0.0.2 ping statistics --- 00:18:47.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.224 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:18:47.224 00:18:47.224 --- 10.0.0.1 ping statistics --- 00:18:47.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.224 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3336144 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3336144 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3336144 ']' 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.224 19:13:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.224 [2024-07-15 19:13:27.313908] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:47.224 [2024-07-15 19:13:27.314005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.224 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.224 [2024-07-15 19:13:27.376577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.224 [2024-07-15 19:13:27.481096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.224 [2024-07-15 19:13:27.481165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:47.224 [2024-07-15 19:13:27.481178] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.224 [2024-07-15 19:13:27.481189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.224 [2024-07-15 19:13:27.481198] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.224 [2024-07-15 19:13:27.481223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:48.158 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.158 [2024-07-15 19:13:28.542517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.158 [2024-07-15 19:13:28.558501] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.158 [2024-07-15 19:13:28.558770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.417 [2024-07-15 19:13:28.591024] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:48.417 malloc0 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3336304 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3336304 /var/tmp/bdevperf.sock 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3336304 ']' 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.417 19:13:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:48.417 [2024-07-15 19:13:28.684288] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:18:48.417 [2024-07-15 19:13:28.684370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336304 ] 00:18:48.417 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.417 [2024-07-15 19:13:28.745211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.675 [2024-07-15 19:13:28.856816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.241 19:13:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.241 19:13:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:49.241 19:13:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:49.499 [2024-07-15 19:13:29.921449] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.499 [2024-07-15 19:13:29.921570] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:49.757 TLSTESTn1 00:18:49.757 19:13:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.757 Running I/O for 10 seconds... 
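The trace above is the core of the FIPS TLS data-path check: fips.sh writes the NVMeTLSkey-1:01:... PSK to key.txt, starts a bdevperf instance in wait-for-RPC mode, attaches an NVMe-oF/TCP controller to the listener at 10.0.0.2:4420 using that key, and then drives the queued verify workload through the bdevperf RPC helper; the latency table that follows is that job's result. A condensed sketch of the initiator-side commands exactly as they appear in the trace (paths shortened, with $SPDK standing for the spdk checkout in the jenkins workspace):

  # fips.sh@145: bdevperf on core 2, idle until RPCs arrive (-z), 128 QD, 4 KiB I/O, verify, 10 s
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # fips.sh@150: attach a TLS-protected controller using the pre-shared key
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/fips/key.txt

  # fips.sh@154: run the configured job against bdev TLSTESTn1; results are reported below
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests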
00:19:01.963 00:19:01.963 Latency(us) 00:19:01.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.963 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:01.963 Verification LBA range: start 0x0 length 0x2000 00:19:01.963 TLSTESTn1 : 10.05 2239.24 8.75 0.00 0.00 57003.35 6262.33 90488.23 00:19:01.963 =================================================================================================================== 00:19:01.963 Total : 2239.24 8.75 0.00 0.00 57003.35 6262.33 90488.23 00:19:01.963 0 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:01.963 nvmf_trace.0 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3336304 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3336304 ']' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3336304 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3336304 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3336304' 00:19:01.963 killing process with pid 3336304 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3336304 00:19:01.963 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.963 00:19:01.963 Latency(us) 00:19:01.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.963 =================================================================================================================== 00:19:01.963 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.963 [2024-07-15 19:13:40.319411] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3336304 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.963 rmmod nvme_tcp 00:19:01.963 rmmod nvme_fabrics 00:19:01.963 rmmod nvme_keyring 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3336144 ']' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3336144 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3336144 ']' 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3336144 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:01.963 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3336144 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3336144' 00:19:01.964 killing process with pid 3336144 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3336144 00:19:01.964 [2024-07-15 19:13:40.690991] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3336144 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.964 19:13:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.896 19:13:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:02.896 19:13:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:02.896 00:19:02.896 real 0m18.245s 00:19:02.896 user 0m23.324s 00:19:02.896 sys 0m6.493s 00:19:02.896 19:13:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:02.896 19:13:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:02.896 ************************************ 00:19:02.896 END TEST nvmf_fips 
00:19:02.896 ************************************ 00:19:02.896 19:13:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:02.896 19:13:43 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:02.896 19:13:43 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:02.896 19:13:43 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:02.896 19:13:43 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:02.896 19:13:43 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.896 19:13:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:04.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:04.858 19:13:44 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:04.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:04.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:04.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:04.858 19:13:44 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.859 19:13:44 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.859 19:13:44 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.859 19:13:44 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:04.859 19:13:44 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:04.859 19:13:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:04.859 19:13:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
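The run_test dispatch at the end of the trace above is reached only on physical-NIC TCP runs: nvmf.sh re-checks the NET_TYPE and transport, re-scans the supported NICs, and schedules the ADQ perf test only if at least one usable interface was found (here both E810 ports qualify). A simplified sketch of that gating (nvmf.sh@71-76 in the trace; the variable names are inferred from the expanded values and may not match the script exactly):

  if [[ $NET_TYPE == phy ]] && [ "$TEST_TRANSPORT" = tcp ]; then   # expands to: [[ phy == phy ]], '[' tcp = tcp ']'
      gather_supported_nvmf_pci_devs                               # PCI scan, fills net_devs (see below)
      TCP_INTERFACE_LIST=("${net_devs[@]}")
      if (( ${#TCP_INTERFACE_LIST[@]} > 0 )); then                 # here: (( 2 > 0 ))
          run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp
      fi
  fi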
00:19:04.859 19:13:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.859 ************************************ 00:19:04.859 START TEST nvmf_perf_adq 00:19:04.859 ************************************ 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:04.859 * Looking for test storage... 00:19:04.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.859 19:13:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 
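Each of these repeated device scans runs the same classification logic: gather_supported_nvmf_pci_devs buckets NICs by PCI vendor:device ID (0x8086 0x1592/0x159b for Intel E810, 0x8086 0x37d2 for X722, the 0x15b3 IDs for Mellanox), keeps the E810 list because this job selects the e810 NIC class, and then resolves each selected PCI function to its kernel netdev name through sysfs; those names (cvl_0_0, cvl_0_1) are what the TCP tests later move into the namespace and address. A condensed sketch of nvmf/common.sh@296-401 as seen in the trace (pci_bus_cache is assumed to be pre-populated by an earlier lspci scan; the vendor variables are inlined here):

  e810+=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})   # Intel E810 device IDs
  x722+=(${pci_bus_cache["0x8086:0x37d2"]})                                     # Intel X722
  pci_devs=("${e810[@]}")                    # the [[ e810 == e810 ]] branch: keep only the E810 ports
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # netdev(s) bound to this PCI function
      # the script then checks that the link is up (the [[ up == up ]] lines in the trace)
      pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done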
00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.760 19:13:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.761 19:13:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:06.761 19:13:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:06.761 19:13:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:06.761 19:13:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:07.328 19:13:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:09.230 19:13:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:14.501 19:13:54 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.501 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:14.502 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:14.502 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:14.502 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:14.502 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.502 19:13:54 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:19:14.502 00:19:14.502 --- 10.0.0.2 ping statistics --- 00:19:14.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.502 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:19:14.502 00:19:14.502 --- 10.0.0.1 ping statistics --- 00:19:14.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.502 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3342171 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3342171 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3342171 ']' 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.502 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.502 [2024-07-15 19:13:54.743170] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
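(Editor's note, condensed from the nvmf_tcp_init trace above; interface names and addresses are the ones in this log, the helpers themselves live in nvmf/common.sh.) One E810 port is moved into a private network namespace to act as the target at 10.0.0.2 while the other port stays in the default namespace as the initiator at 10.0.0.1, giving a real-hardware loopback topology on a single host:

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                # sanity-check the loopback path

The nvmf_tgt process is then launched with 'ip netns exec cvl_0_0_ns_spdk' so it listens on the target-side address, which is exactly what the nvmfappstart line above shows.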
00:19:14.503 [2024-07-15 19:13:54.743268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.503 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.503 [2024-07-15 19:13:54.812196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.503 [2024-07-15 19:13:54.924982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.503 [2024-07-15 19:13:54.925040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.503 [2024-07-15 19:13:54.925068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.503 [2024-07-15 19:13:54.925079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.503 [2024-07-15 19:13:54.925089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.503 [2024-07-15 19:13:54.925404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.503 [2024-07-15 19:13:54.925428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.503 [2024-07-15 19:13:54.925481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.503 [2024-07-15 19:13:54.925484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.761 19:13:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.761 [2024-07-15 19:13:55.146874] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.761 Malloc1 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.761 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:15.019 [2024-07-15 19:13:55.198499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3342322 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:15.019 19:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:15.019 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:16.917 
"tick_rate": 2700000000, 00:19:16.917 "poll_groups": [ 00:19:16.917 { 00:19:16.917 "name": "nvmf_tgt_poll_group_000", 00:19:16.917 "admin_qpairs": 1, 00:19:16.917 "io_qpairs": 1, 00:19:16.917 "current_admin_qpairs": 1, 00:19:16.917 "current_io_qpairs": 1, 00:19:16.917 "pending_bdev_io": 0, 00:19:16.917 "completed_nvme_io": 21739, 00:19:16.917 "transports": [ 00:19:16.917 { 00:19:16.917 "trtype": "TCP" 00:19:16.917 } 00:19:16.917 ] 00:19:16.917 }, 00:19:16.917 { 00:19:16.917 "name": "nvmf_tgt_poll_group_001", 00:19:16.917 "admin_qpairs": 0, 00:19:16.917 "io_qpairs": 1, 00:19:16.917 "current_admin_qpairs": 0, 00:19:16.917 "current_io_qpairs": 1, 00:19:16.917 "pending_bdev_io": 0, 00:19:16.917 "completed_nvme_io": 19527, 00:19:16.917 "transports": [ 00:19:16.917 { 00:19:16.917 "trtype": "TCP" 00:19:16.917 } 00:19:16.917 ] 00:19:16.917 }, 00:19:16.917 { 00:19:16.917 "name": "nvmf_tgt_poll_group_002", 00:19:16.917 "admin_qpairs": 0, 00:19:16.917 "io_qpairs": 1, 00:19:16.917 "current_admin_qpairs": 0, 00:19:16.917 "current_io_qpairs": 1, 00:19:16.917 "pending_bdev_io": 0, 00:19:16.917 "completed_nvme_io": 20655, 00:19:16.917 "transports": [ 00:19:16.917 { 00:19:16.917 "trtype": "TCP" 00:19:16.917 } 00:19:16.917 ] 00:19:16.917 }, 00:19:16.917 { 00:19:16.917 "name": "nvmf_tgt_poll_group_003", 00:19:16.917 "admin_qpairs": 0, 00:19:16.917 "io_qpairs": 1, 00:19:16.917 "current_admin_qpairs": 0, 00:19:16.917 "current_io_qpairs": 1, 00:19:16.917 "pending_bdev_io": 0, 00:19:16.917 "completed_nvme_io": 19750, 00:19:16.917 "transports": [ 00:19:16.917 { 00:19:16.917 "trtype": "TCP" 00:19:16.917 } 00:19:16.917 ] 00:19:16.917 } 00:19:16.917 ] 00:19:16.917 }' 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:16.917 19:13:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3342322 00:19:25.024 Initializing NVMe Controllers 00:19:25.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:25.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:25.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:25.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:25.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:25.024 Initialization complete. Launching workers. 
00:19:25.024 ======================================================== 00:19:25.024 Latency(us) 00:19:25.024 Device Information : IOPS MiB/s Average min max 00:19:25.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10176.57 39.75 6289.87 1388.89 10858.45 00:19:25.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10220.77 39.92 6261.21 1793.38 49692.55 00:19:25.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10576.86 41.32 6050.73 1487.51 9071.10 00:19:25.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11147.44 43.54 5740.57 2078.49 8601.74 00:19:25.024 ======================================================== 00:19:25.024 Total : 42121.64 164.54 6077.49 1388.89 49692.55 00:19:25.024 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.024 rmmod nvme_tcp 00:19:25.024 rmmod nvme_fabrics 00:19:25.024 rmmod nvme_keyring 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3342171 ']' 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3342171 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3342171 ']' 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3342171 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3342171 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3342171' 00:19:25.024 killing process with pid 3342171 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3342171 00:19:25.024 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3342171 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.282 19:14:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.812 19:14:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:27.812 19:14:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:27.812 19:14:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:28.072 19:14:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:29.974 19:14:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.249 19:14:15 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.249 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.249 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
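(Editor's note.) The device-discovery pass repeated here, after the ice driver reload, is the same gather_supported_nvmf_pci_devs walk as before: it looks for Intel E810 functions (vendor 0x8086, device 0x159b) and collects the net interfaces beneath them in sysfs. A stripped-down sketch of that loop, not the literal common.sh code (which also tracks x722/mlx parts and RDMA-only cases):

    # simplified equivalent of the detection loop: find net devices on E810 ports
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for dev in "$pci"/net/*; do
            [[ -e $dev ]] || continue
            echo "Found net devices under ${pci##*/}: ${dev##*/}"
        done
    done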
00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.249 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:35.250 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.250 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.250 
19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:35.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:35.250 00:19:35.250 --- 10.0.0.2 ping statistics --- 00:19:35.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.250 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:35.250 00:19:35.250 --- 10.0.0.1 ping statistics --- 00:19:35.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.250 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:35.250 net.core.busy_poll = 1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:35.250 net.core.busy_read = 1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:35.250 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3344937 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3344937 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3344937 ']' 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.510 19:14:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:35.510 [2024-07-15 19:14:15.753640] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:19:35.510 [2024-07-15 19:14:15.753728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.510 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.510 [2024-07-15 19:14:15.823794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.768 [2024-07-15 19:14:15.941812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.768 [2024-07-15 19:14:15.941869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.768 [2024-07-15 19:14:15.941893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.768 [2024-07-15 19:14:15.941906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.768 [2024-07-15 19:14:15.941917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
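(Editor's note, condensed from the adq_configure_driver trace just above; all commands run inside the target namespace via 'ip netns exec cvl_0_0_ns_spdk'.) This is where the host side of ADQ is set up: hardware TC offload is enabled on the E810 port, busy polling is turned on, an mqprio root qdisc splits the queues into two traffic classes, and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 to the dedicated class in hardware:

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # TC0 (default) on queues 0-1, TC1 (ADQ) on queues 2-3
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP (dst 10.0.0.2:4420) into the ADQ class in hardware
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target-side counterpart appears a little further down in the trace: sock_impl_set_options --enable-placement-id 1 on the posix implementation and nvmf_create_transport -t tcp --sock-priority 1.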
00:19:35.768 [2024-07-15 19:14:15.942007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.768 [2024-07-15 19:14:15.942062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.768 [2024-07-15 19:14:15.942102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.768 [2024-07-15 19:14:15.942105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.333 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.592 [2024-07-15 19:14:16.861498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.592 Malloc1 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.592 19:14:16 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.592 [2024-07-15 19:14:16.914843] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3345097 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:36.592 19:14:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:36.592 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.492 19:14:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:38.492 19:14:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.492 19:14:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.751 19:14:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.751 19:14:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:38.751 "tick_rate": 2700000000, 00:19:38.751 "poll_groups": [ 00:19:38.751 { 00:19:38.751 "name": "nvmf_tgt_poll_group_000", 00:19:38.751 "admin_qpairs": 1, 00:19:38.751 "io_qpairs": 2, 00:19:38.751 "current_admin_qpairs": 1, 00:19:38.751 "current_io_qpairs": 2, 00:19:38.751 "pending_bdev_io": 0, 00:19:38.751 "completed_nvme_io": 22972, 00:19:38.751 "transports": [ 00:19:38.751 { 00:19:38.751 "trtype": "TCP" 00:19:38.751 } 00:19:38.751 ] 00:19:38.751 }, 00:19:38.751 { 00:19:38.751 "name": "nvmf_tgt_poll_group_001", 00:19:38.751 "admin_qpairs": 0, 00:19:38.751 "io_qpairs": 2, 00:19:38.751 "current_admin_qpairs": 0, 00:19:38.751 "current_io_qpairs": 2, 00:19:38.751 "pending_bdev_io": 0, 00:19:38.751 "completed_nvme_io": 27205, 00:19:38.751 "transports": [ 00:19:38.751 { 00:19:38.751 "trtype": "TCP" 00:19:38.751 } 00:19:38.751 ] 00:19:38.751 }, 00:19:38.751 { 00:19:38.751 "name": "nvmf_tgt_poll_group_002", 00:19:38.751 "admin_qpairs": 0, 00:19:38.751 "io_qpairs": 0, 00:19:38.751 "current_admin_qpairs": 0, 00:19:38.751 "current_io_qpairs": 0, 00:19:38.751 "pending_bdev_io": 0, 00:19:38.751 "completed_nvme_io": 0, 
00:19:38.751 "transports": [ 00:19:38.751 { 00:19:38.751 "trtype": "TCP" 00:19:38.751 } 00:19:38.751 ] 00:19:38.751 }, 00:19:38.751 { 00:19:38.751 "name": "nvmf_tgt_poll_group_003", 00:19:38.751 "admin_qpairs": 0, 00:19:38.751 "io_qpairs": 0, 00:19:38.751 "current_admin_qpairs": 0, 00:19:38.751 "current_io_qpairs": 0, 00:19:38.751 "pending_bdev_io": 0, 00:19:38.751 "completed_nvme_io": 0, 00:19:38.751 "transports": [ 00:19:38.751 { 00:19:38.751 "trtype": "TCP" 00:19:38.751 } 00:19:38.751 ] 00:19:38.751 } 00:19:38.751 ] 00:19:38.751 }' 00:19:38.751 19:14:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:38.751 19:14:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:38.751 19:14:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:38.751 19:14:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:38.751 19:14:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3345097 00:19:47.055 Initializing NVMe Controllers 00:19:47.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:47.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:47.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:47.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:47.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:47.055 Initialization complete. Launching workers. 00:19:47.055 ======================================================== 00:19:47.055 Latency(us) 00:19:47.055 Device Information : IOPS MiB/s Average min max 00:19:47.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6546.10 25.57 9812.96 1793.76 55820.22 00:19:47.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5522.70 21.57 11589.39 2172.53 55599.06 00:19:47.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6227.80 24.33 10281.19 1868.41 54296.40 00:19:47.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8031.60 31.37 7972.01 1410.46 54346.04 00:19:47.055 ======================================================== 00:19:47.055 Total : 26328.19 102.84 9734.75 1410.46 55820.22 00:19:47.055 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.055 rmmod nvme_tcp 00:19:47.055 rmmod nvme_fabrics 00:19:47.055 rmmod nvme_keyring 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3344937 ']' 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3344937 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3344937 ']' 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3344937 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3344937 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3344937' 00:19:47.055 killing process with pid 3344937 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3344937 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3344937 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.055 19:14:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.367 19:14:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.367 19:14:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:50.367 00:19:50.367 real 0m45.511s 00:19:50.367 user 2m32.027s 00:19:50.367 sys 0m13.301s 00:19:50.367 19:14:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:50.367 19:14:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.367 ************************************ 00:19:50.367 END TEST nvmf_perf_adq 00:19:50.367 ************************************ 00:19:50.367 19:14:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:50.367 19:14:30 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:50.367 19:14:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:50.367 19:14:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.367 19:14:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:50.367 ************************************ 00:19:50.367 START TEST nvmf_shutdown 00:19:50.367 ************************************ 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:50.367 * Looking for test storage... 
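(Editor's note.) The nvmftestfini teardown traced above is the mirror image of the setup: stop the target, unload the host-side NVMe modules, and dismantle the namespace topology. A rough sketch; the namespace removal itself happens inside _remove_spdk_ns, whose output the trace redirects away, so the explicit 'ip netns delete' line here is an assumption about what it does:

    kill "$nvmfpid"                         # killprocess: stop the nvmf_tgt under test
    wait "$nvmfpid" 2>/dev/null || true
    modprobe -v -r nvme-tcp                 # unload host-side NVMe/TCP modules
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1                # drop the initiator-side address
    ip netns delete cvl_0_0_ns_spdk         # assumed: _remove_spdk_ns tears the namespace down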
00:19:50.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:50.367 ************************************ 00:19:50.367 START TEST nvmf_shutdown_tc1 00:19:50.367 ************************************ 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:50.367 19:14:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:50.367 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.368 19:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:52.272 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:52.272 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.272 19:14:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.272 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:52.273 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:52.273 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.273 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:19:52.532 00:19:52.532 --- 10.0.0.2 ping statistics --- 00:19:52.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.532 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:19:52.532 00:19:52.532 --- 10.0.0.1 ping statistics --- 00:19:52.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.532 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:52.532 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3348386 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3348386 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3348386 ']' 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.533 19:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.533 [2024-07-15 19:14:32.881283] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:19:52.533 [2024-07-15 19:14:32.881376] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.533 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.533 [2024-07-15 19:14:32.955750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.791 [2024-07-15 19:14:33.067018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.791 [2024-07-15 19:14:33.067083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.791 [2024-07-15 19:14:33.067113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.791 [2024-07-15 19:14:33.067124] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.791 [2024-07-15 19:14:33.067134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.791 [2024-07-15 19:14:33.068899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.791 [2024-07-15 19:14:33.068933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.791 [2024-07-15 19:14:33.068978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:52.791 [2024-07-15 19:14:33.068982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.791 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:52.791 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:52.791 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.791 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:52.792 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:52.792 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.792 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:52.792 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.792 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.055 [2024-07-15 19:14:33.226815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.055 19:14:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.055 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.055 Malloc1 00:19:53.055 [2024-07-15 19:14:33.311003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.055 Malloc2 00:19:53.055 Malloc3 00:19:53.055 Malloc4 00:19:53.055 Malloc5 00:19:53.312 Malloc6 00:19:53.312 Malloc7 00:19:53.312 Malloc8 00:19:53.312 Malloc9 00:19:53.312 Malloc10 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3348567 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3348567 
/var/tmp/bdevperf.sock 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3348567 ']' 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.572 { 00:19:53.572 "params": { 00:19:53.572 "name": "Nvme$subsystem", 00:19:53.572 "trtype": "$TEST_TRANSPORT", 00:19:53.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.572 "adrfam": "ipv4", 00:19:53.572 "trsvcid": "$NVMF_PORT", 00:19:53.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.572 "hdgst": ${hdgst:-false}, 00:19:53.572 "ddgst": ${ddgst:-false} 00:19:53.572 }, 00:19:53.572 "method": "bdev_nvme_attach_controller" 00:19:53.572 } 00:19:53.572 EOF 00:19:53.572 )") 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.572 { 00:19:53.572 "params": { 00:19:53.572 "name": "Nvme$subsystem", 00:19:53.572 "trtype": "$TEST_TRANSPORT", 00:19:53.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.572 "adrfam": "ipv4", 00:19:53.572 "trsvcid": "$NVMF_PORT", 00:19:53.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.572 "hdgst": ${hdgst:-false}, 00:19:53.572 "ddgst": ${ddgst:-false} 00:19:53.572 }, 00:19:53.572 "method": "bdev_nvme_attach_controller" 00:19:53.572 } 00:19:53.572 EOF 00:19:53.572 )") 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.572 { 00:19:53.572 "params": { 00:19:53.572 
"name": "Nvme$subsystem", 00:19:53.572 "trtype": "$TEST_TRANSPORT", 00:19:53.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.572 "adrfam": "ipv4", 00:19:53.572 "trsvcid": "$NVMF_PORT", 00:19:53.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.572 "hdgst": ${hdgst:-false}, 00:19:53.572 "ddgst": ${ddgst:-false} 00:19:53.572 }, 00:19:53.572 "method": "bdev_nvme_attach_controller" 00:19:53.572 } 00:19:53.572 EOF 00:19:53.572 )") 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.572 { 00:19:53.572 "params": { 00:19:53.572 "name": "Nvme$subsystem", 00:19:53.572 "trtype": "$TEST_TRANSPORT", 00:19:53.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.572 "adrfam": "ipv4", 00:19:53.572 "trsvcid": "$NVMF_PORT", 00:19:53.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.572 "hdgst": ${hdgst:-false}, 00:19:53.572 "ddgst": ${ddgst:-false} 00:19:53.572 }, 00:19:53.572 "method": "bdev_nvme_attach_controller" 00:19:53.572 } 00:19:53.572 EOF 00:19:53.572 )") 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.572 { 00:19:53.572 "params": { 00:19:53.572 "name": "Nvme$subsystem", 00:19:53.572 "trtype": "$TEST_TRANSPORT", 00:19:53.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.572 "adrfam": "ipv4", 00:19:53.572 "trsvcid": "$NVMF_PORT", 00:19:53.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.572 "hdgst": ${hdgst:-false}, 00:19:53.572 "ddgst": ${ddgst:-false} 00:19:53.572 }, 00:19:53.572 "method": "bdev_nvme_attach_controller" 00:19:53.572 } 00:19:53.572 EOF 00:19:53.572 )") 00:19:53.572 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.573 { 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme$subsystem", 00:19:53.573 "trtype": "$TEST_TRANSPORT", 00:19:53.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "$NVMF_PORT", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.573 "hdgst": ${hdgst:-false}, 00:19:53.573 "ddgst": ${ddgst:-false} 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 } 00:19:53.573 EOF 00:19:53.573 )") 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.573 { 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme$subsystem", 
00:19:53.573 "trtype": "$TEST_TRANSPORT", 00:19:53.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "$NVMF_PORT", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.573 "hdgst": ${hdgst:-false}, 00:19:53.573 "ddgst": ${ddgst:-false} 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 } 00:19:53.573 EOF 00:19:53.573 )") 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.573 { 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme$subsystem", 00:19:53.573 "trtype": "$TEST_TRANSPORT", 00:19:53.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "$NVMF_PORT", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.573 "hdgst": ${hdgst:-false}, 00:19:53.573 "ddgst": ${ddgst:-false} 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 } 00:19:53.573 EOF 00:19:53.573 )") 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.573 { 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme$subsystem", 00:19:53.573 "trtype": "$TEST_TRANSPORT", 00:19:53.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "$NVMF_PORT", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.573 "hdgst": ${hdgst:-false}, 00:19:53.573 "ddgst": ${ddgst:-false} 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 } 00:19:53.573 EOF 00:19:53.573 )") 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:53.573 { 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme$subsystem", 00:19:53.573 "trtype": "$TEST_TRANSPORT", 00:19:53.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "$NVMF_PORT", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:53.573 "hdgst": ${hdgst:-false}, 00:19:53.573 "ddgst": ${ddgst:-false} 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 } 00:19:53.573 EOF 00:19:53.573 )") 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:53.573 19:14:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme1", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.573 "hdgst": false, 00:19:53.573 "ddgst": false 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 },{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme2", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:53.573 "hdgst": false, 00:19:53.573 "ddgst": false 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 },{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme3", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:53.573 "hdgst": false, 00:19:53.573 "ddgst": false 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 },{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme4", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:53.573 "hdgst": false, 00:19:53.573 "ddgst": false 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 },{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme5", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:53.573 "hdgst": false, 00:19:53.573 "ddgst": false 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 },{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme6", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:53.573 "hdgst": false, 00:19:53.573 "ddgst": false 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 },{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme7", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:53.573 "hdgst": false, 00:19:53.573 "ddgst": false 00:19:53.573 }, 00:19:53.573 "method": "bdev_nvme_attach_controller" 00:19:53.573 },{ 00:19:53.573 "params": { 00:19:53.573 "name": "Nvme8", 00:19:53.573 "trtype": "tcp", 00:19:53.573 "traddr": "10.0.0.2", 00:19:53.573 "adrfam": "ipv4", 00:19:53.573 "trsvcid": "4420", 00:19:53.573 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:53.573 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:53.574 "hdgst": false, 
00:19:53.574 "ddgst": false 00:19:53.574 }, 00:19:53.574 "method": "bdev_nvme_attach_controller" 00:19:53.574 },{ 00:19:53.574 "params": { 00:19:53.574 "name": "Nvme9", 00:19:53.574 "trtype": "tcp", 00:19:53.574 "traddr": "10.0.0.2", 00:19:53.574 "adrfam": "ipv4", 00:19:53.574 "trsvcid": "4420", 00:19:53.574 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:53.574 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:53.574 "hdgst": false, 00:19:53.574 "ddgst": false 00:19:53.574 }, 00:19:53.574 "method": "bdev_nvme_attach_controller" 00:19:53.574 },{ 00:19:53.574 "params": { 00:19:53.574 "name": "Nvme10", 00:19:53.574 "trtype": "tcp", 00:19:53.574 "traddr": "10.0.0.2", 00:19:53.574 "adrfam": "ipv4", 00:19:53.574 "trsvcid": "4420", 00:19:53.574 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:53.574 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:53.574 "hdgst": false, 00:19:53.574 "ddgst": false 00:19:53.574 }, 00:19:53.574 "method": "bdev_nvme_attach_controller" 00:19:53.574 }' 00:19:53.574 [2024-07-15 19:14:33.811188] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:19:53.574 [2024-07-15 19:14:33.811273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:53.574 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.574 [2024-07-15 19:14:33.875484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.574 [2024-07-15 19:14:33.985630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3348567 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:55.504 19:14:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:56.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3348567 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3348386 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:56.453 19:14:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.453 { 00:19:56.453 "params": { 00:19:56.453 "name": "Nvme$subsystem", 00:19:56.453 "trtype": "$TEST_TRANSPORT", 00:19:56.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.453 "adrfam": "ipv4", 00:19:56.453 "trsvcid": "$NVMF_PORT", 00:19:56.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.453 "hdgst": ${hdgst:-false}, 00:19:56.453 "ddgst": ${ddgst:-false} 00:19:56.453 }, 00:19:56.453 "method": "bdev_nvme_attach_controller" 00:19:56.453 } 00:19:56.453 EOF 00:19:56.453 )") 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.453 { 00:19:56.453 "params": { 00:19:56.453 "name": "Nvme$subsystem", 00:19:56.453 "trtype": "$TEST_TRANSPORT", 00:19:56.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.453 "adrfam": "ipv4", 00:19:56.453 "trsvcid": "$NVMF_PORT", 00:19:56.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.453 "hdgst": ${hdgst:-false}, 00:19:56.453 "ddgst": ${ddgst:-false} 00:19:56.453 }, 00:19:56.453 "method": "bdev_nvme_attach_controller" 00:19:56.453 } 00:19:56.453 EOF 00:19:56.453 )") 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.453 { 00:19:56.453 "params": { 00:19:56.453 "name": "Nvme$subsystem", 00:19:56.453 "trtype": "$TEST_TRANSPORT", 00:19:56.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.453 "adrfam": "ipv4", 00:19:56.453 "trsvcid": "$NVMF_PORT", 00:19:56.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.453 "hdgst": ${hdgst:-false}, 00:19:56.453 "ddgst": ${ddgst:-false} 00:19:56.453 }, 00:19:56.453 "method": "bdev_nvme_attach_controller" 00:19:56.453 } 00:19:56.453 EOF 00:19:56.453 )") 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.453 { 00:19:56.453 "params": { 00:19:56.453 "name": "Nvme$subsystem", 00:19:56.453 "trtype": "$TEST_TRANSPORT", 00:19:56.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.453 "adrfam": "ipv4", 00:19:56.453 "trsvcid": "$NVMF_PORT", 00:19:56.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.453 "hdgst": ${hdgst:-false}, 00:19:56.453 "ddgst": ${ddgst:-false} 00:19:56.453 }, 00:19:56.453 "method": "bdev_nvme_attach_controller" 00:19:56.453 } 00:19:56.453 EOF 00:19:56.453 )") 00:19:56.453 19:14:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.453 { 00:19:56.453 "params": { 00:19:56.453 "name": "Nvme$subsystem", 00:19:56.453 "trtype": "$TEST_TRANSPORT", 00:19:56.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.453 "adrfam": "ipv4", 00:19:56.453 "trsvcid": "$NVMF_PORT", 00:19:56.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.453 "hdgst": ${hdgst:-false}, 00:19:56.453 "ddgst": ${ddgst:-false} 00:19:56.453 }, 00:19:56.453 "method": "bdev_nvme_attach_controller" 00:19:56.453 } 00:19:56.453 EOF 00:19:56.453 )") 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.453 { 00:19:56.453 "params": { 00:19:56.453 "name": "Nvme$subsystem", 00:19:56.453 "trtype": "$TEST_TRANSPORT", 00:19:56.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.453 "adrfam": "ipv4", 00:19:56.453 "trsvcid": "$NVMF_PORT", 00:19:56.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.453 "hdgst": ${hdgst:-false}, 00:19:56.453 "ddgst": ${ddgst:-false} 00:19:56.453 }, 00:19:56.453 "method": "bdev_nvme_attach_controller" 00:19:56.453 } 00:19:56.453 EOF 00:19:56.453 )") 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.453 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.453 { 00:19:56.453 "params": { 00:19:56.453 "name": "Nvme$subsystem", 00:19:56.453 "trtype": "$TEST_TRANSPORT", 00:19:56.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.453 "adrfam": "ipv4", 00:19:56.453 "trsvcid": "$NVMF_PORT", 00:19:56.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.453 "hdgst": ${hdgst:-false}, 00:19:56.453 "ddgst": ${ddgst:-false} 00:19:56.453 }, 00:19:56.453 "method": "bdev_nvme_attach_controller" 00:19:56.453 } 00:19:56.453 EOF 00:19:56.453 )") 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.454 { 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme$subsystem", 00:19:56.454 "trtype": "$TEST_TRANSPORT", 00:19:56.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "$NVMF_PORT", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.454 "hdgst": ${hdgst:-false}, 00:19:56.454 "ddgst": ${ddgst:-false} 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 } 00:19:56.454 EOF 00:19:56.454 )") 00:19:56.454 19:14:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.454 { 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme$subsystem", 00:19:56.454 "trtype": "$TEST_TRANSPORT", 00:19:56.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "$NVMF_PORT", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.454 "hdgst": ${hdgst:-false}, 00:19:56.454 "ddgst": ${ddgst:-false} 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 } 00:19:56.454 EOF 00:19:56.454 )") 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.454 { 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme$subsystem", 00:19:56.454 "trtype": "$TEST_TRANSPORT", 00:19:56.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "$NVMF_PORT", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.454 "hdgst": ${hdgst:-false}, 00:19:56.454 "ddgst": ${ddgst:-false} 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 } 00:19:56.454 EOF 00:19:56.454 )") 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:56.454 19:14:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme1", 00:19:56.454 "trtype": "tcp", 00:19:56.454 "traddr": "10.0.0.2", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "4420", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.454 "hdgst": false, 00:19:56.454 "ddgst": false 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 },{ 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme2", 00:19:56.454 "trtype": "tcp", 00:19:56.454 "traddr": "10.0.0.2", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "4420", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:56.454 "hdgst": false, 00:19:56.454 "ddgst": false 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 },{ 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme3", 00:19:56.454 "trtype": "tcp", 00:19:56.454 "traddr": "10.0.0.2", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "4420", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:56.454 "hdgst": false, 00:19:56.454 "ddgst": false 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 },{ 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme4", 00:19:56.454 "trtype": "tcp", 00:19:56.454 "traddr": "10.0.0.2", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "4420", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:56.454 "hdgst": false, 00:19:56.454 "ddgst": false 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 },{ 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme5", 00:19:56.454 "trtype": "tcp", 00:19:56.454 "traddr": "10.0.0.2", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "4420", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:56.454 "hdgst": false, 00:19:56.454 "ddgst": false 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 },{ 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme6", 00:19:56.454 "trtype": "tcp", 00:19:56.454 "traddr": "10.0.0.2", 00:19:56.454 "adrfam": "ipv4", 00:19:56.454 "trsvcid": "4420", 00:19:56.454 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:56.454 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:56.454 "hdgst": false, 00:19:56.454 "ddgst": false 00:19:56.454 }, 00:19:56.454 "method": "bdev_nvme_attach_controller" 00:19:56.454 },{ 00:19:56.454 "params": { 00:19:56.454 "name": "Nvme7", 00:19:56.454 "trtype": "tcp", 00:19:56.455 "traddr": "10.0.0.2", 00:19:56.455 "adrfam": "ipv4", 00:19:56.455 "trsvcid": "4420", 00:19:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:56.455 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:56.455 "hdgst": false, 00:19:56.455 "ddgst": false 00:19:56.455 }, 00:19:56.455 "method": "bdev_nvme_attach_controller" 00:19:56.455 },{ 00:19:56.455 "params": { 00:19:56.455 "name": "Nvme8", 00:19:56.455 "trtype": "tcp", 00:19:56.455 "traddr": "10.0.0.2", 00:19:56.455 "adrfam": "ipv4", 00:19:56.455 "trsvcid": "4420", 00:19:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:56.455 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:56.455 "hdgst": false, 
00:19:56.455 "ddgst": false 00:19:56.455 }, 00:19:56.455 "method": "bdev_nvme_attach_controller" 00:19:56.455 },{ 00:19:56.455 "params": { 00:19:56.455 "name": "Nvme9", 00:19:56.455 "trtype": "tcp", 00:19:56.455 "traddr": "10.0.0.2", 00:19:56.455 "adrfam": "ipv4", 00:19:56.455 "trsvcid": "4420", 00:19:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:56.455 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:56.455 "hdgst": false, 00:19:56.455 "ddgst": false 00:19:56.455 }, 00:19:56.455 "method": "bdev_nvme_attach_controller" 00:19:56.455 },{ 00:19:56.455 "params": { 00:19:56.455 "name": "Nvme10", 00:19:56.455 "trtype": "tcp", 00:19:56.455 "traddr": "10.0.0.2", 00:19:56.455 "adrfam": "ipv4", 00:19:56.455 "trsvcid": "4420", 00:19:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:56.455 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:56.455 "hdgst": false, 00:19:56.455 "ddgst": false 00:19:56.455 }, 00:19:56.455 "method": "bdev_nvme_attach_controller" 00:19:56.455 }' 00:19:56.455 [2024-07-15 19:14:36.828007] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:19:56.455 [2024-07-15 19:14:36.828086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348873 ] 00:19:56.455 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.712 [2024-07-15 19:14:36.893829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.712 [2024-07-15 19:14:37.003595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.613 Running I/O for 1 seconds... 00:19:59.549 00:19:59.549 Latency(us) 00:19:59.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.549 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme1n1 : 1.17 219.55 13.72 0.00 0.00 287775.10 22039.51 276513.37 00:19:59.549 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme2n1 : 1.15 223.10 13.94 0.00 0.00 276801.61 19709.35 246997.90 00:19:59.549 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme3n1 : 1.11 229.98 14.37 0.00 0.00 261772.71 17573.36 250104.79 00:19:59.549 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme4n1 : 1.16 220.66 13.79 0.00 0.00 266969.51 20097.71 273406.48 00:19:59.549 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme5n1 : 1.17 217.91 13.62 0.00 0.00 264524.04 20486.07 256318.58 00:19:59.549 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme6n1 : 1.17 218.77 13.67 0.00 0.00 256937.91 22233.69 256318.58 00:19:59.549 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme7n1 : 1.16 220.90 13.81 0.00 0.00 247300.36 22427.88 234570.33 00:19:59.549 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 
0x0 length 0x400 00:19:59.549 Nvme8n1 : 1.20 212.56 13.29 0.00 0.00 252862.77 23301.69 257872.02 00:19:59.549 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme9n1 : 1.21 211.66 13.23 0.00 0.00 247997.44 21456.97 287387.50 00:19:59.549 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.549 Verification LBA range: start 0x0 length 0x400 00:19:59.549 Nvme10n1 : 1.27 303.02 18.94 0.00 0.00 170503.87 4854.52 260978.92 00:19:59.549 =================================================================================================================== 00:19:59.549 Total : 2278.10 142.38 0.00 0.00 249399.74 4854.52 287387.50 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.807 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.807 rmmod nvme_tcp 00:19:59.807 rmmod nvme_fabrics 00:20:00.064 rmmod nvme_keyring 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3348386 ']' 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3348386 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3348386 ']' 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3348386 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3348386 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3348386' 00:20:00.064 killing process with pid 3348386 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3348386 00:20:00.064 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3348386 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.630 19:14:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.536 00:20:02.536 real 0m12.202s 00:20:02.536 user 0m35.721s 00:20:02.536 sys 0m3.224s 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.536 ************************************ 00:20:02.536 END TEST nvmf_shutdown_tc1 00:20:02.536 ************************************ 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:02.536 ************************************ 00:20:02.536 START TEST nvmf_shutdown_tc2 00:20:02.536 ************************************ 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.536 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.537 19:14:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:02.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:02.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:02.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:02.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.537 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.798 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.798 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.798 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.798 19:14:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:20:02.798 00:20:02.798 --- 10.0.0.2 ping statistics --- 00:20:02.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.798 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:20:02.798 00:20:02.798 --- 10.0.0.1 ping statistics --- 00:20:02.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.798 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3349754 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3349754 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3349754 ']' 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.798 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.798 [2024-07-15 19:14:43.133798] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:02.798 [2024-07-15 19:14:43.133891] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.798 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.798 [2024-07-15 19:14:43.211600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.056 [2024-07-15 19:14:43.333434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.056 [2024-07-15 19:14:43.333503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.056 [2024-07-15 19:14:43.333520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.056 [2024-07-15 19:14:43.333533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.056 [2024-07-15 19:14:43.333544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.056 [2024-07-15 19:14:43.333645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.056 [2024-07-15 19:14:43.333671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.056 [2024-07-15 19:14:43.333741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:03.056 [2024-07-15 19:14:43.333743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.056 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.056 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:03.056 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.056 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.056 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.314 [2024-07-15 19:14:43.491780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.314 19:14:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.314 Malloc1 00:20:03.314 [2024-07-15 19:14:43.574970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.314 Malloc2 00:20:03.314 Malloc3 00:20:03.314 Malloc4 00:20:03.572 Malloc5 00:20:03.572 Malloc6 00:20:03.572 Malloc7 00:20:03.572 Malloc8 00:20:03.572 Malloc9 00:20:03.572 Malloc10 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3349936 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3349936 /var/tmp/bdevperf.sock 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3349936 ']' 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.831 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 
00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.832 { 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme$subsystem", 00:20:03.832 "trtype": "$TEST_TRANSPORT", 00:20:03.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "$NVMF_PORT", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.832 "hdgst": ${hdgst:-false}, 00:20:03.832 "ddgst": ${ddgst:-false} 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 } 00:20:03.832 EOF 00:20:03.832 )") 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:03.832 19:14:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme1", 00:20:03.832 "trtype": "tcp", 00:20:03.832 "traddr": "10.0.0.2", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "4420", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.832 "hdgst": false, 00:20:03.832 "ddgst": false 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 },{ 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme2", 00:20:03.832 "trtype": "tcp", 00:20:03.832 "traddr": "10.0.0.2", 00:20:03.832 "adrfam": "ipv4", 00:20:03.832 "trsvcid": "4420", 00:20:03.832 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.832 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:03.832 "hdgst": false, 00:20:03.832 "ddgst": false 00:20:03.832 }, 00:20:03.832 "method": "bdev_nvme_attach_controller" 00:20:03.832 },{ 00:20:03.832 "params": { 00:20:03.832 "name": "Nvme3", 00:20:03.832 "trtype": "tcp", 00:20:03.832 "traddr": "10.0.0.2", 00:20:03.832 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:03.833 "hdgst": false, 00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 },{ 00:20:03.833 "params": { 00:20:03.833 "name": "Nvme4", 00:20:03.833 "trtype": "tcp", 00:20:03.833 "traddr": "10.0.0.2", 00:20:03.833 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:03.833 "hdgst": false, 00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 },{ 00:20:03.833 "params": { 00:20:03.833 "name": "Nvme5", 00:20:03.833 "trtype": "tcp", 00:20:03.833 "traddr": "10.0.0.2", 00:20:03.833 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:03.833 "hdgst": false, 00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 },{ 00:20:03.833 "params": { 00:20:03.833 "name": "Nvme6", 00:20:03.833 "trtype": "tcp", 00:20:03.833 "traddr": "10.0.0.2", 00:20:03.833 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:03.833 "hdgst": false, 00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 },{ 00:20:03.833 "params": { 00:20:03.833 "name": "Nvme7", 00:20:03.833 "trtype": "tcp", 00:20:03.833 "traddr": "10.0.0.2", 00:20:03.833 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:03.833 "hdgst": false, 00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 },{ 00:20:03.833 "params": { 00:20:03.833 "name": "Nvme8", 00:20:03.833 "trtype": "tcp", 00:20:03.833 "traddr": "10.0.0.2", 00:20:03.833 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:03.833 "hdgst": false, 
00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 },{ 00:20:03.833 "params": { 00:20:03.833 "name": "Nvme9", 00:20:03.833 "trtype": "tcp", 00:20:03.833 "traddr": "10.0.0.2", 00:20:03.833 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:03.833 "hdgst": false, 00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 },{ 00:20:03.833 "params": { 00:20:03.833 "name": "Nvme10", 00:20:03.833 "trtype": "tcp", 00:20:03.833 "traddr": "10.0.0.2", 00:20:03.833 "adrfam": "ipv4", 00:20:03.833 "trsvcid": "4420", 00:20:03.833 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:03.833 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:03.833 "hdgst": false, 00:20:03.833 "ddgst": false 00:20:03.833 }, 00:20:03.833 "method": "bdev_nvme_attach_controller" 00:20:03.833 }' 00:20:03.833 [2024-07-15 19:14:44.077683] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:03.833 [2024-07-15 19:14:44.077775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349936 ] 00:20:03.833 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.833 [2024-07-15 19:14:44.141731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.833 [2024-07-15 19:14:44.251429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.737 Running I/O for 10 seconds... 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.737 19:14:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:05.737 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.997 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:05.997 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:05.997 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:06.257 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3349936 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3349936 ']' 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3349936 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3349936 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3349936' 00:20:06.518 killing process with pid 3349936 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3349936 00:20:06.518 19:14:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3349936 00:20:06.518 Received shutdown signal, test time was about 0.999412 seconds 00:20:06.518 00:20:06.518 Latency(us) 00:20:06.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.518 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme1n1 : 0.93 205.96 12.87 0.00 0.00 307085.97 22622.06 260978.92 00:20:06.518 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme2n1 : 1.00 256.36 16.02 0.00 0.00 233468.97 35340.89 243891.01 00:20:06.518 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme3n1 : 0.96 266.85 16.68 0.00 0.00 227849.86 19612.25 259425.47 00:20:06.518 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme4n1 : 0.97 198.81 12.43 0.00 0.00 299729.60 39224.51 306028.85 00:20:06.518 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme5n1 : 0.97 263.10 16.44 0.00 0.00 222147.89 22039.51 262532.36 00:20:06.518 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme6n1 : 0.91 211.10 13.19 0.00 0.00 268954.48 20971.52 260978.92 00:20:06.518 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme7n1 : 0.92 208.53 13.03 0.00 0.00 266478.93 21262.79 265639.25 00:20:06.518 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme8n1 : 0.96 203.50 12.72 0.00 0.00 267201.86 7864.32 281173.71 00:20:06.518 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme9n1 : 0.95 202.22 12.64 0.00 0.00 264334.60 19612.25 267192.70 00:20:06.518 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.518 Verification LBA range: start 0x0 length 0x400 00:20:06.518 Nvme10n1 : 0.94 204.72 12.80 0.00 0.00 254678.41 18155.90 260978.92 00:20:06.518 
=================================================================================================================== 00:20:06.518 Total : 2221.16 138.82 0.00 0.00 258176.44 7864.32 306028.85 00:20:07.086 19:14:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3349754 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.024 rmmod nvme_tcp 00:20:08.024 rmmod nvme_fabrics 00:20:08.024 rmmod nvme_keyring 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3349754 ']' 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3349754 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3349754 ']' 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3349754 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3349754 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3349754' 00:20:08.024 killing process with pid 3349754 00:20:08.024 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3349754 00:20:08.024 19:14:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3349754 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.592 19:14:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.497 00:20:10.497 real 0m8.005s 00:20:10.497 user 0m24.372s 00:20:10.497 sys 0m1.521s 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:10.497 ************************************ 00:20:10.497 END TEST nvmf_shutdown_tc2 00:20:10.497 ************************************ 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.497 19:14:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:10.784 ************************************ 00:20:10.784 START TEST nvmf_shutdown_tc3 00:20:10.784 ************************************ 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.784 19:14:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:10.784 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.784 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:10.785 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:10.785 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:10.785 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.785 19:14:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.785 19:14:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:10.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:20:10.785 00:20:10.785 --- 10.0.0.2 ping statistics --- 00:20:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.785 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:20:10.785 00:20:10.785 --- 10.0.0.1 ping statistics --- 00:20:10.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.785 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3350845 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3350845 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3350845 ']' 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.785 19:14:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.785 [2024-07-15 19:14:51.182140] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:10.785 [2024-07-15 19:14:51.182223] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.043 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.043 [2024-07-15 19:14:51.251071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.043 [2024-07-15 19:14:51.368325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.043 [2024-07-15 19:14:51.368388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.043 [2024-07-15 19:14:51.368416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.043 [2024-07-15 19:14:51.368429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.043 [2024-07-15 19:14:51.368440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
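Up to this point the tc3 prologue has carved the two E810 ports (cvl_0_0, cvl_0_1) into a back-to-back NVMe/TCP topology: one port stays in the root namespace as the initiator interface, the other is moved into a dedicated namespace for the target, both sides get a 10.0.0.x/24 address, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside that namespace. Condensed to the underlying iproute2/iptables calls (names, addresses and flags taken from the trace above; pid handling and the repeated netns-exec prefix built from NVMF_TARGET_NS_CMD are simplified away), the setup is roughly:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
# launch the target inside the namespace (binary path shortened here), then
# wait for its RPC socket, as waitforlisten 3350845 does in the trace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The -m 0x1E core mask comes from the nvmfappstart -m 0x1E call traced above (bits 1 through 4 set), which is why four reactors report in on the following lines, and -e 0xFFFF matches the "Tracepoint Group Mask 0xFFFF specified" notice.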
00:20:11.043 [2024-07-15 19:14:51.368533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.043 [2024-07-15 19:14:51.368649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.043 [2024-07-15 19:14:51.368713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:11.043 [2024-07-15 19:14:51.368715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.976 [2024-07-15 19:14:52.154633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:11.976 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:11.977 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.977 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.977 Malloc1 00:20:11.977 [2024-07-15 19:14:52.244136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.977 Malloc2 00:20:11.977 Malloc3 00:20:11.977 Malloc4 00:20:12.234 Malloc5 00:20:12.234 Malloc6 00:20:12.234 Malloc7 00:20:12.234 Malloc8 00:20:12.234 Malloc9 00:20:12.234 Malloc10 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3351040 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3351040 /var/tmp/bdevperf.sock 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3351040 ']' 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
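The bdevperf command recorded just above takes its attach configuration on --json /dev/fd/63, which is the file descriptor a bash process substitution produces: the JSON printed by gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 (expanded a little further down) is fed to bdevperf without a temporary file. Reconstructed with the path shortened, and with the process substitution inferred from /dev/fd/63 rather than visible in the trace, the invocation is roughly:

# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: verify workload, -t 10: runtime in seconds
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

At start-up bdevperf replays each bdev_nvme_attach_controller entry from that JSON, which is why Nvme1n1 through Nvme10n1 appear as bdevs against the ten subsystems listening on 10.0.0.2 port 4420, and why the waitforio loop later polls bdev_get_iostat -b Nvme1n1 over /var/tmp/bdevperf.sock.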
00:20:12.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.493 { 00:20:12.493 "params": { 00:20:12.493 "name": "Nvme$subsystem", 00:20:12.493 "trtype": "$TEST_TRANSPORT", 00:20:12.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.493 "adrfam": "ipv4", 00:20:12.493 "trsvcid": "$NVMF_PORT", 00:20:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.493 "hdgst": ${hdgst:-false}, 00:20:12.493 "ddgst": ${ddgst:-false} 00:20:12.493 }, 00:20:12.493 "method": "bdev_nvme_attach_controller" 00:20:12.493 } 00:20:12.493 EOF 00:20:12.493 )") 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.493 { 00:20:12.493 "params": { 00:20:12.493 "name": "Nvme$subsystem", 00:20:12.493 "trtype": "$TEST_TRANSPORT", 00:20:12.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.493 "adrfam": "ipv4", 00:20:12.493 "trsvcid": "$NVMF_PORT", 00:20:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.493 "hdgst": ${hdgst:-false}, 00:20:12.493 "ddgst": ${ddgst:-false} 00:20:12.493 }, 00:20:12.493 "method": "bdev_nvme_attach_controller" 00:20:12.493 } 00:20:12.493 EOF 00:20:12.493 )") 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.493 { 00:20:12.493 "params": { 00:20:12.493 "name": "Nvme$subsystem", 00:20:12.493 "trtype": "$TEST_TRANSPORT", 00:20:12.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.493 "adrfam": "ipv4", 00:20:12.493 "trsvcid": "$NVMF_PORT", 00:20:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.493 "hdgst": ${hdgst:-false}, 00:20:12.493 "ddgst": ${ddgst:-false} 00:20:12.493 }, 00:20:12.493 "method": "bdev_nvme_attach_controller" 00:20:12.493 } 00:20:12.493 EOF 00:20:12.493 )") 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.493 { 00:20:12.493 "params": { 00:20:12.493 "name": "Nvme$subsystem", 00:20:12.493 "trtype": "$TEST_TRANSPORT", 00:20:12.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.493 "adrfam": "ipv4", 00:20:12.493 "trsvcid": "$NVMF_PORT", 
00:20:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.493 "hdgst": ${hdgst:-false}, 00:20:12.493 "ddgst": ${ddgst:-false} 00:20:12.493 }, 00:20:12.493 "method": "bdev_nvme_attach_controller" 00:20:12.493 } 00:20:12.493 EOF 00:20:12.493 )") 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.493 { 00:20:12.493 "params": { 00:20:12.493 "name": "Nvme$subsystem", 00:20:12.493 "trtype": "$TEST_TRANSPORT", 00:20:12.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.493 "adrfam": "ipv4", 00:20:12.493 "trsvcid": "$NVMF_PORT", 00:20:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.493 "hdgst": ${hdgst:-false}, 00:20:12.493 "ddgst": ${ddgst:-false} 00:20:12.493 }, 00:20:12.493 "method": "bdev_nvme_attach_controller" 00:20:12.493 } 00:20:12.493 EOF 00:20:12.493 )") 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.493 { 00:20:12.493 "params": { 00:20:12.493 "name": "Nvme$subsystem", 00:20:12.493 "trtype": "$TEST_TRANSPORT", 00:20:12.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.493 "adrfam": "ipv4", 00:20:12.493 "trsvcid": "$NVMF_PORT", 00:20:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.493 "hdgst": ${hdgst:-false}, 00:20:12.493 "ddgst": ${ddgst:-false} 00:20:12.493 }, 00:20:12.493 "method": "bdev_nvme_attach_controller" 00:20:12.493 } 00:20:12.493 EOF 00:20:12.493 )") 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.493 { 00:20:12.493 "params": { 00:20:12.493 "name": "Nvme$subsystem", 00:20:12.493 "trtype": "$TEST_TRANSPORT", 00:20:12.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.493 "adrfam": "ipv4", 00:20:12.493 "trsvcid": "$NVMF_PORT", 00:20:12.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.493 "hdgst": ${hdgst:-false}, 00:20:12.493 "ddgst": ${ddgst:-false} 00:20:12.493 }, 00:20:12.493 "method": "bdev_nvme_attach_controller" 00:20:12.493 } 00:20:12.493 EOF 00:20:12.493 )") 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.493 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.494 { 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme$subsystem", 00:20:12.494 "trtype": "$TEST_TRANSPORT", 00:20:12.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "$NVMF_PORT", 00:20:12.494 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.494 "hdgst": ${hdgst:-false}, 00:20:12.494 "ddgst": ${ddgst:-false} 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 } 00:20:12.494 EOF 00:20:12.494 )") 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.494 { 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme$subsystem", 00:20:12.494 "trtype": "$TEST_TRANSPORT", 00:20:12.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "$NVMF_PORT", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.494 "hdgst": ${hdgst:-false}, 00:20:12.494 "ddgst": ${ddgst:-false} 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 } 00:20:12.494 EOF 00:20:12.494 )") 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:12.494 { 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme$subsystem", 00:20:12.494 "trtype": "$TEST_TRANSPORT", 00:20:12.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "$NVMF_PORT", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.494 "hdgst": ${hdgst:-false}, 00:20:12.494 "ddgst": ${ddgst:-false} 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 } 00:20:12.494 EOF 00:20:12.494 )") 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:12.494 19:14:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme1", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme2", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme3", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme4", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme5", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme6", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme7", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme8", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:12.494 "hdgst": false, 
00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme9", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 },{ 00:20:12.494 "params": { 00:20:12.494 "name": "Nvme10", 00:20:12.494 "trtype": "tcp", 00:20:12.494 "traddr": "10.0.0.2", 00:20:12.494 "adrfam": "ipv4", 00:20:12.494 "trsvcid": "4420", 00:20:12.494 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:12.494 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:12.494 "hdgst": false, 00:20:12.494 "ddgst": false 00:20:12.494 }, 00:20:12.494 "method": "bdev_nvme_attach_controller" 00:20:12.494 }' 00:20:12.494 [2024-07-15 19:14:52.752792] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:12.494 [2024-07-15 19:14:52.752903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351040 ] 00:20:12.494 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.494 [2024-07-15 19:14:52.816207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.752 [2024-07-15 19:14:52.927434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.128 Running I/O for 10 seconds... 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:14.385 19:14:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:14.643 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:14.643 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:14.643 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:14.643 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.643 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:14.643 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:14.643 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3350845 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3350845 ']' 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3350845 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3350845 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3350845' 00:20:14.917 killing process with pid 3350845 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3350845 00:20:14.917 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3350845 00:20:14.917 
[2024-07-15 19:14:55.108161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e1a0 is same with the state(5) to be set
00:20:14.918 [2024-07-15 19:14:55.109055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e1a0 is same with the state(5) to be set
00:20:14.918 [2024-07-15 19:14:55.110448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1940 is same with the state(5) to be set
00:20:14.918 [2024-07-15 19:14:55.111268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:14.918 [2024-07-15 19:14:55.111306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:14.918 [2024-07-15 19:14:55.111325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:14.918 [2024-07-15 19:14:55.111340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:14.918 [2024-07-15 19:14:55.111354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:14.918 [2024-07-15 19:14:55.111368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:14.918 [2024-07-15 19:14:55.111390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:14.918 [2024-07-15 19:14:55.111404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:14.918 [2024-07-15 19:14:55.111417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6830 is same with the state(5) to be set
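The repeated tcp.c:1607 lines above all come from the target-side receive-state setter: during teardown the same TCP qpair is repeatedly asked to enter state 5 while it is already in that state, so the setter only logs the request and changes nothing, once per redundant call and per tqpair address. A minimal illustrative sketch of that kind of guard is shown below; the type and function names are hypothetical and this is not the SPDK tcp.c source, and the meaning of state 5 is an assumption.

    /* Illustrative sketch only: hypothetical names, not the SPDK implementation. */
    #include <stdio.h>

    enum recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        /* intermediate receive states elided */
        RECV_STATE_QUIESCING = 5,   /* assumption: "state(5)" in the log is a terminal/quiescing state */
    };

    struct tqpair {
        enum recv_state recv_state;
    };

    static void
    qpair_set_recv_state(struct tqpair *tqpair, enum recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Same shape as the repeated tcp.c:1607 line above: the requested
             * state equals the current state, so nothing is changed. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int
    main(void)
    {
        struct tqpair qp = { .recv_state = RECV_STATE_QUIESCING };

        /* Asking for the state the qpair is already in triggers the message. */
        qpair_set_recv_state(&qp, RECV_STATE_QUIESCING);
        return 0;
    }

The ABORTED - SQ DELETION (00/08) completions above report the same shutdown from the host side: with the admin submission queue being deleted, the outstanding ASYNC EVENT REQUEST commands complete with generic status code 0x08 (command aborted due to SQ deletion).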
00:20:14.919 [2024-07-15 19:14:55.111964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f1940 is same with the state(5) to be set
00:20:14.919 [2024-07-15 19:14:55.115640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e640 is same with the state(5) to be set
00:20:14.919 [2024-07-15 19:14:55.116873] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:14.919 [2024-07-15 19:14:55.117422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110e640 is same with the state(5) to be set
00:20:14.919 [2024-07-15 19:14:55.117746] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:14.919 [2024-07-15 19:14:55.124362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110eae0 is same with the state(5) to be set
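The two nvme_tcp.c:1241 errors come from the host-side common-header handler rejecting an incoming PDU whose type field is 0x00, which an initiator does not expect once the ICReq/ICResp exchange is done (an all-zero header typically means the connection is already being torn down). The sketch below shows that kind of type check; the macro and function names are mine, the numeric PDU type values follow the NVMe/TCP transport specification, and the set of "expected" types is a simplification rather than SPDK's actual nvme_tcp_pdu_ch_handle() logic.

    /* Illustrative sketch only, not the SPDK nvme_tcp.c implementation. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_TCP_PDU_TYPE_C2H_TERM_REQ  0x03
    #define NVME_TCP_PDU_TYPE_CAPSULE_RESP  0x05
    #define NVME_TCP_PDU_TYPE_C2H_DATA      0x07
    #define NVME_TCP_PDU_TYPE_R2T           0x09

    /* Returns true if a host-side connection that has completed connection
     * establishment should ever receive this PDU type from the controller. */
    static bool
    host_pdu_type_is_expected(uint8_t pdu_type)
    {
        switch (pdu_type) {
        case NVME_TCP_PDU_TYPE_C2H_TERM_REQ:
        case NVME_TCP_PDU_TYPE_CAPSULE_RESP:
        case NVME_TCP_PDU_TYPE_C2H_DATA:
        case NVME_TCP_PDU_TYPE_R2T:
            return true;
        default:
            return false;
        }
    }

    int
    main(void)
    {
        uint8_t pdu_type = 0x00;    /* an all-zero common header, as seen in the log */

        if (!host_pdu_type_is_expected(pdu_type)) {
            /* Mirrors the nvme_tcp.c:1241 message above. */
            fprintf(stderr, "Unexpected PDU type 0x%02x\n", pdu_type);
        }
        return 0;
    }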
00:20:14.920 [2024-07-15 19:14:55.125223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110eae0 is same with the state(5) to be set
00:20:14.920 [2024-07-15 19:14:55.126355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110efa0 is same with the state(5) to be set
00:20:14.921 [2024-07-15 19:14:55.127194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110efa0 is same with the state(5) to be set
00:20:14.921 [2024-07-15 19:14:55.127904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f440 is same with the state(5) to be set
00:20:14.921 [2024-07-15 19:14:55.128706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f440 is same with the state(5) to be set
00:20:14.922 [2024-07-15 19:14:55.129669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f900 is same with the state(5) to be set
00:20:14.922 [2024-07-15 19:14:55.130500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110f900 is same with the state(5) to be set
00:20:14.922 [2024-07-15 19:14:55.132342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110240 is same with the state(5) to be set
00:20:14.922 [2024-07-15 19:14:55.138029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
00:20:14.922 [2024-07-15 19:14:55.138029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:14.922 [2024-07-15 19:14:55.138072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1, cid:2 and cid:3 ...]
00:20:14.922 [2024-07-15 19:14:55.138183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248abb0 is same with the state(5) to be set
[... the same block of four aborted ASYNC EVENT REQUESTs followed by a recv-state error repeats for tqpair=0x248a990, 0x2590350, 0x2462600, 0x1ec8610, 0x23e8c60 and 0x2592240 ...]
00:20:14.923 [2024-07-15 19:14:55.139271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6830 (9): Bad file descriptor
[... and again for tqpair=0x23f2450 and 0x23e9280 ...]
BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.923 [2024-07-15 19:14:55.140693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:14.923 [2024-07-15 19:14:55.140707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.140984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.140998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:14.924 [2024-07-15 19:14:55.141028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 
[2024-07-15 19:14:55.141342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.141982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.141998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.142012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.924 [2024-07-15 19:14:55.142027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.924 [2024-07-15 19:14:55.142042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.142344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.142385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:14.925 [2024-07-15 19:14:55.142467] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2621390 was disconnected and freed. reset controller. 00:20:14.925 [2024-07-15 19:14:55.143276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.143984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.143999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.925 [2024-07-15 19:14:55.144015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.925 [2024-07-15 19:14:55.144029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:14.926 [2024-07-15 19:14:55.144420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 
19:14:55.144723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.144982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.144996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145042] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.145272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.145358] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2623b10 was disconnected and freed. reset controller. 
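The repeated pairs above are the qpair's in-flight READs being failed back while its submission queue is deleted: every command print from nvme_qpair.c:243 is answered by a completion whose status is shown as "(00/08)", i.e. Status Code Type 0x0 (generic command status) with Status Code 0x08, "Command Aborted due to SQ Deletion" in the NVMe spec. Once the outstanding commands drain, bdev_nvme logs the qpair (0x2623b10 here) as disconnected and freed and moves on to the controller reset. A minimal decoder for that "(SCT/SC)" pair, written purely as an illustration for reading such logs (it is not part of SPDK or of this test), could look like:

import re

# Illustrative helper, not SPDK code: decode the "(SCT/SC)" suffix printed by
# spdk_nvme_print_completion, e.g. "ABORTED - SQ DELETION (00/08)".
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(line):
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    if sct == 0x0:  # generic command status type
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "SCT 0x%x, SC 0x%02x" % (sct, sc)

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0"))
# prints: Command Aborted due to SQ Deletion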
00:20:14.926 [2024-07-15 19:14:55.146956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.146982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.147005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.926 [2024-07-15 19:14:55.147021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.926 [2024-07-15 19:14:55.147038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 
19:14:55.147293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 
19:14:55.147595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 
19:14:55.147909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.147983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.147999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 
19:14:55.148219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.927 [2024-07-15 19:14:55.148320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.927 [2024-07-15 19:14:55.148336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 
19:14:55.148521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 
19:14:55.148826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.148932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.148946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.149031] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2622860 was disconnected and freed. reset controller. 00:20:14.928 [2024-07-15 19:14:55.150214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:14.928 [2024-07-15 19:14:55.150259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248abb0 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248a990 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2590350 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2462600 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec8610 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e8c60 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2592240 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2450 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.150501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9280 (9): Bad file descriptor 00:20:14.928 [2024-07-15 19:14:55.152040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:14.928 [2024-07-15 19:14:55.152126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152477] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.928 [2024-07-15 19:14:55.152678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.928 [2024-07-15 19:14:55.152692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.152980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.152996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
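The same pattern repeats for the qpair being reset toward nqn.2016-06.io.spdk:cnode8: cid climbs by one and lba advances by 128 blocks per aborted command with len:128, so the batch is one contiguous sequential stream of outstanding I/O being failed back. When triaging long autotest logs it can help to collapse such runs; a throwaway sketch (again only an illustration, not part of the test scripts) along these lines works on the command prints shown here:

import re
from itertools import groupby

# Match the command prints, e.g. "READ sqid:1 cid:44 nsid:1 lba:22016 len:128".
CMD = re.compile(r"(READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

def summarize(lines):
    cmds = []
    for line in lines:
        m = CMD.search(line)
        if m:
            cmds.append((m.group(1), int(m.group(2)), int(m.group(3)),
                         int(m.group(4)), int(m.group(5))))
    # One summary line per consecutive (opcode, sqid) run.
    for (op, sqid), grp in groupby(cmds, key=lambda c: (c[0], c[1])):
        grp = list(grp)
        first, last = grp[0], grp[-1]
        print("%s sqid:%d: %d aborted cmds, cid %d..%d, lba %d..%d"
              % (op, sqid, len(grp), first[2], last[2],
                 first[3], last[3] + last[4] - 1))

Fed the READ prints of this batch up to this point, it collapses them into a single summary line covering 51 commands, cid 0..50, lba 16384..22911.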
00:20:14.929 [2024-07-15 19:14:55.153724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.929 [2024-07-15 19:14:55.153961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.929 [2024-07-15 19:14:55.153976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.153991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.154006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.154020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 
19:14:55.154036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.154050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.154066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.154080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.154096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.154109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.154123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245ad70 is same with the state(5) to be set 00:20:14.930 [2024-07-15 19:14:55.155378] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:14.930 [2024-07-15 19:14:55.156226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:14.930 [2024-07-15 19:14:55.156257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:14.930 [2024-07-15 19:14:55.156490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.930 [2024-07-15 19:14:55.156521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248abb0 with addr=10.0.0.2, port=4420 00:20:14.930 [2024-07-15 19:14:55.156538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248abb0 is same with the state(5) to be set 00:20:14.930 [2024-07-15 19:14:55.156705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.930 [2024-07-15 19:14:55.156730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248a990 with addr=10.0.0.2, port=4420 00:20:14.930 [2024-07-15 19:14:55.156747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248a990 is same with the state(5) to be set 00:20:14.930 [2024-07-15 19:14:55.156870] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:14.930 [2024-07-15 19:14:55.157005] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:14.930 [2024-07-15 19:14:55.157074] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:14.930 [2024-07-15 19:14:55.157511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.930 [2024-07-15 19:14:55.157539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2590350 with addr=10.0.0.2, port=4420 00:20:14.930 [2024-07-15 19:14:55.157555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2590350 is same with the state(5) to be set 00:20:14.930 [2024-07-15 19:14:55.157684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.930 [2024-07-15 19:14:55.157709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c6830 
with addr=10.0.0.2, port=4420 00:20:14.930 [2024-07-15 19:14:55.157725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6830 is same with the state(5) to be set 00:20:14.930 [2024-07-15 19:14:55.157748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248abb0 (9): Bad file descriptor 00:20:14.930 [2024-07-15 19:14:55.157767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248a990 (9): Bad file descriptor 00:20:14.930 [2024-07-15 19:14:55.158140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2590350 (9): Bad file descriptor 00:20:14.930 [2024-07-15 19:14:55.158168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6830 (9): Bad file descriptor 00:20:14.930 [2024-07-15 19:14:55.158186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:14.930 [2024-07-15 19:14:55.158200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:14.930 [2024-07-15 19:14:55.158215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:14.930 [2024-07-15 19:14:55.158237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:14.930 [2024-07-15 19:14:55.158251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:14.930 [2024-07-15 19:14:55.158264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:14.930 [2024-07-15 19:14:55.158341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158525] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.930 [2024-07-15 19:14:55.158916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.930 [2024-07-15 19:14:55.158931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.158946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.158963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.158977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.158992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.159969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.159985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.931 [2024-07-15 19:14:55.160245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.931 [2024-07-15 19:14:55.160261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.160275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.160291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.160305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.160321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.160335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.160349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c1b50 is same with the state(5) to be set 00:20:14.932 [2024-07-15 19:14:55.160425] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23c1b50 was disconnected and freed. reset controller. 
00:20:14.932 [2024-07-15 19:14:55.160481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.932 [2024-07-15 19:14:55.160501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.932 [2024-07-15 19:14:55.160519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:14.932 [2024-07-15 19:14:55.160533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:14.932 [2024-07-15 19:14:55.160547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:14.932 [2024-07-15 19:14:55.160564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:14.932 [2024-07-15 19:14:55.160579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:14.932 [2024-07-15 19:14:55.160593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:14.932 [2024-07-15 19:14:55.160636] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.932 [2024-07-15 19:14:55.161868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.932 [2024-07-15 19:14:55.161907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.932 [2024-07-15 19:14:55.161942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:14.932 [2024-07-15 19:14:55.162003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.162972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.162987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.932 [2024-07-15 19:14:55.163003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.932 [2024-07-15 19:14:55.163017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.163965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.163980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25152f0 is same with the state(5) to be set 00:20:14.933 [2024-07-15 19:14:55.165214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.933 [2024-07-15 19:14:55.165550] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.933 [2024-07-15 19:14:55.165566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.165975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.165989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.934 [2024-07-15 19:14:55.166501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.934 [2024-07-15 19:14:55.166517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:14.935 [2024-07-15 19:14:55.166790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.166981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.166995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.167010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.167024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.167040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.167054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.167070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.167085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 
19:14:55.167100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.167114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.167130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.167144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.167169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.167183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.167198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2516780 is same with the state(5) to be set 00:20:14.935 [2024-07-15 19:14:55.168476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.168975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.168990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.169006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.169020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.169040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.935 [2024-07-15 19:14:55.169055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.935 [2024-07-15 19:14:55.169071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.169982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.169998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.936 [2024-07-15 19:14:55.170363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.936 [2024-07-15 19:14:55.170376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.170392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.170406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.170422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.170436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.170452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.170466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.170480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261e9f0 is same with the state(5) to be set 00:20:14.937 [2024-07-15 19:14:55.171722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.171778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.171810] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.171846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.171882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.171924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.171953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.171983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.171997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.937 [2024-07-15 19:14:55.172874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.937 [2024-07-15 19:14:55.172895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.172922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.172936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.172952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.172966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.172982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.172996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:14.938 [2024-07-15 19:14:55.173089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 
19:14:55.173394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173689] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.173719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.173734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x261fec0 is same with the state(5) to be set 00:20:14.938 [2024-07-15 19:14:55.175870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.175916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.175943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.175959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.175975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.175989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.176005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.176019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.176034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.176049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.176064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.176078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.176094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.176108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.938 [2024-07-15 19:14:55.176124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.938 [2024-07-15 19:14:55.176137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.176970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.176984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.939 [2024-07-15 19:14:55.177455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.939 [2024-07-15 19:14:55.177471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:14.940 [2024-07-15 19:14:55.177925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.940 [2024-07-15 19:14:55.177940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2624fc0 is same with the state(5) to be set 00:20:14.940 [2024-07-15 19:14:55.179802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:14.940 [2024-07-15 19:14:55.179834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:14.940 [2024-07-15 19:14:55.179853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:14.940 [2024-07-15 19:14:55.180196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.940 [2024-07-15 19:14:55.180226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8c60 with addr=10.0.0.2, port=4420 00:20:14.940 [2024-07-15 19:14:55.180243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8c60 is same with the state(5) to be set 00:20:14.940 [2024-07-15 19:14:55.180324] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.940 [2024-07-15 19:14:55.180349] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
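For context on the "connect() failed, errno = 111" entries above: 111 is ECONNREFUSED, which is expected here because the shutdown test has already taken the target down while bdevperf keeps retrying 10.0.0.2:4420. A minimal probe from the host side is sketched below; the IP and port are copied from this run, the probe itself is only an illustration and not part of the test scripts.

```bash
# errno 111 == ECONNREFUSED: nothing is listening on the target socket anymore.
# Hypothetical quick probe using only bash built-ins; IP/port copied from the log above.
TARGET_IP=10.0.0.2
TARGET_PORT=4420
if timeout 1 bash -c "cat < /dev/null > /dev/tcp/${TARGET_IP}/${TARGET_PORT}" 2>/dev/null; then
    echo "listener still up on ${TARGET_IP}:${TARGET_PORT}"
else
    echo "connection refused or timed out, matching the errno = 111 failures above"
fi
```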
00:20:14.940 [2024-07-15 19:14:55.180382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e8c60 (9): Bad file descriptor 00:20:14.940 [2024-07-15 19:14:55.180748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:14.940 task offset: 28416 on job bdev=Nvme7n1 fails 00:20:14.940 00:20:14.940 Latency(us) 00:20:14.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme1n1 ended in about 0.91 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme1n1 : 0.91 140.61 8.79 70.30 0.00 300083.83 22622.06 260978.92 00:20:14.940 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme2n1 ended in about 0.92 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme2n1 : 0.92 139.10 8.69 69.55 0.00 297239.64 21068.61 267192.70 00:20:14.940 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme3n1 ended in about 0.92 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme3n1 : 0.92 207.93 13.00 69.31 0.00 219076.27 15049.01 231463.44 00:20:14.940 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme4n1 ended in about 0.92 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme4n1 : 0.92 209.40 13.09 69.80 0.00 212851.86 19320.98 257872.02 00:20:14.940 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme5n1 ended in about 0.93 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme5n1 : 0.93 138.13 8.63 69.07 0.00 280992.17 21554.06 253211.69 00:20:14.940 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme6n1 ended in about 0.93 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme6n1 : 0.93 137.65 8.60 68.82 0.00 276059.53 22233.69 256318.58 00:20:14.940 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme7n1 ended in about 0.90 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme7n1 : 0.90 212.86 13.30 70.95 0.00 195536.97 5801.15 250104.79 00:20:14.940 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme8n1 ended in about 0.91 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme8n1 : 0.91 211.66 13.23 70.55 0.00 192389.50 7912.87 256318.58 00:20:14.940 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme9n1 ended in about 0.91 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme9n1 : 0.91 141.39 8.84 70.70 0.00 250127.30 11359.57 304475.40 00:20:14.940 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.940 Job: Nvme10n1 ended in about 0.93 seconds with error 00:20:14.940 Verification LBA range: start 0x0 length 0x400 00:20:14.940 Nvme10n1 : 0.93 141.31 8.83 68.51 0.00 248638.04 26408.58 282727.16 00:20:14.940 =================================================================================================================== 00:20:14.940 
Total : 1680.04 105.00 697.57 0.00 242330.42 5801.15 304475.40 00:20:14.940 [2024-07-15 19:14:55.209158] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:14.940 [2024-07-15 19:14:55.209250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:14.940 [2024-07-15 19:14:55.209627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.940 [2024-07-15 19:14:55.209664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2592240 with addr=10.0.0.2, port=4420 00:20:14.940 [2024-07-15 19:14:55.209693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592240 is same with the state(5) to be set 00:20:14.940 [2024-07-15 19:14:55.209843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.940 [2024-07-15 19:14:55.209870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23f2450 with addr=10.0.0.2, port=4420 00:20:14.940 [2024-07-15 19:14:55.209895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f2450 is same with the state(5) to be set 00:20:14.940 [2024-07-15 19:14:55.210034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.940 [2024-07-15 19:14:55.210061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9280 with addr=10.0.0.2, port=4420 00:20:14.940 [2024-07-15 19:14:55.210077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9280 is same with the state(5) to be set 00:20:14.940 [2024-07-15 19:14:55.211513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:14.940 [2024-07-15 19:14:55.211542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:14.940 [2024-07-15 19:14:55.211562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:14.940 [2024-07-15 19:14:55.211579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:14.940 [2024-07-15 19:14:55.211770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.940 [2024-07-15 19:14:55.211799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec8610 with addr=10.0.0.2, port=4420 00:20:14.940 [2024-07-15 19:14:55.211816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec8610 is same with the state(5) to be set 00:20:14.940 [2024-07-15 19:14:55.211969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.940 [2024-07-15 19:14:55.211996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2462600 with addr=10.0.0.2, port=4420 00:20:14.940 [2024-07-15 19:14:55.212011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2462600 is same with the state(5) to be set 00:20:14.940 [2024-07-15 19:14:55.212036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2592240 (9): Bad file descriptor 00:20:14.940 [2024-07-15 19:14:55.212057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2450 (9): Bad file descriptor 00:20:14.940 [2024-07-15 19:14:55.212089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x23e9280 (9): Bad file descriptor 00:20:14.940 [2024-07-15 19:14:55.212106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:14.940 [2024-07-15 19:14:55.212119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:14.940 [2024-07-15 19:14:55.212135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:14.941 [2024-07-15 19:14:55.212208] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.941 [2024-07-15 19:14:55.212232] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.941 [2024-07-15 19:14:55.212257] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.941 [2024-07-15 19:14:55.212278] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:14.941 [2024-07-15 19:14:55.212381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.212535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.941 [2024-07-15 19:14:55.212563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248a990 with addr=10.0.0.2, port=4420 00:20:14.941 [2024-07-15 19:14:55.212579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248a990 is same with the state(5) to be set 00:20:14.941 [2024-07-15 19:14:55.212712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.941 [2024-07-15 19:14:55.212738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248abb0 with addr=10.0.0.2, port=4420 00:20:14.941 [2024-07-15 19:14:55.212753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248abb0 is same with the state(5) to be set 00:20:14.941 [2024-07-15 19:14:55.212901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.941 [2024-07-15 19:14:55.212931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c6830 with addr=10.0.0.2, port=4420 00:20:14.941 [2024-07-15 19:14:55.212947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6830 is same with the state(5) to be set 00:20:14.941 [2024-07-15 19:14:55.213079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:14.941 [2024-07-15 19:14:55.213104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2590350 with addr=10.0.0.2, port=4420 00:20:14.941 [2024-07-15 19:14:55.213120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2590350 is same with the state(5) to be set 00:20:14.941 [2024-07-15 19:14:55.213139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec8610 (9): Bad file descriptor 00:20:14.941 [2024-07-15 19:14:55.213157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2462600 (9): Bad file descriptor 00:20:14.941 [2024-07-15 19:14:55.213184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:14.941 [2024-07-15 
19:14:55.213197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.213210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:14.941 [2024-07-15 19:14:55.213228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.213242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.213255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:14.941 [2024-07-15 19:14:55.213277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.213291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.213304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:14.941 [2024-07-15 19:14:55.213384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.213405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.213417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.213433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248a990 (9): Bad file descriptor 00:20:14.941 [2024-07-15 19:14:55.213452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248abb0 (9): Bad file descriptor 00:20:14.941 [2024-07-15 19:14:55.213469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6830 (9): Bad file descriptor 00:20:14.941 [2024-07-15 19:14:55.213487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2590350 (9): Bad file descriptor 00:20:14.941 [2024-07-15 19:14:55.213503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.213515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.213528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:14.941 [2024-07-15 19:14:55.213545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.213559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.213574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:14.941 [2024-07-15 19:14:55.213871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.213903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
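As a quick sanity check on the Latency table above, the per-device IOPS figures do sum to the reported Total of 1680.04. The values below are copied verbatim from the table; the one-liner only does the arithmetic.

```bash
# Sum the per-device IOPS column from the Latency table above (Nvme1n1..Nvme10n1).
printf '%s\n' 140.61 139.10 207.93 209.40 138.13 137.65 212.86 211.66 141.39 141.31 |
    awk '{s += $1} END {printf "total IOPS = %.2f\n", s}'   # prints: total IOPS = 1680.04
```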
00:20:14.941 [2024-07-15 19:14:55.213916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.213940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.213952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:14.941 [2024-07-15 19:14:55.213970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.213984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.213998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:14.941 [2024-07-15 19:14:55.214014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.214027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.214040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:14.941 [2024-07-15 19:14:55.214056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:14.941 [2024-07-15 19:14:55.214070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:14.941 [2024-07-15 19:14:55.214083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:14.941 [2024-07-15 19:14:55.214121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.214143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.214156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:14.941 [2024-07-15 19:14:55.214167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
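The repeated "Ctrlr is in error state" / "controller reinitialization failed" / "Resetting controller failed" sequence is each cnode going through the bdev_nvme reset path after the target disappeared. If the bdevperf process were still alive, its controller list could be dumped over its RPC socket; a sketch is below, where the socket path is an assumption for this run and the RPC call is the standard bdev_nvme_get_controllers method.

```bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # repo path taken from this log
RPC_SOCK=/var/tmp/bdevperf.sock                              # assumed bdevperf RPC socket
# Lists the attached NVMe-oF controllers (cnode1..cnode10 in this test) as JSON.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_nvme_get_controllers \
    || echo "bdevperf already exited, which is the expected end state of tc3"
```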
00:20:15.509 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:15.509 19:14:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3351040 00:20:16.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3351040) - No such process 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:16.447 rmmod nvme_tcp 00:20:16.447 rmmod nvme_fabrics 00:20:16.447 rmmod nvme_keyring 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.447 19:14:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.980 19:14:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.980 00:20:18.980 real 0m7.886s 00:20:18.980 user 0m19.524s 00:20:18.980 sys 0m1.437s 00:20:18.980 
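The trace above is the tc3 teardown (stoptarget plus nvmftestfini). Collapsed into plain shell it amounts to the sketch below; every command except the namespace removal appears verbatim in the trace, and the ip netns delete line is only an assumed equivalent of the _remove_spdk_ns helper.

```bash
# stoptarget: drop the bdevperf state/config files generated for this run
rm -f ./local-job0-0-verify.state
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
# nvmftestfini / nvmfcleanup: flush I/O and unload the kernel NVMe-oF modules
sync
modprobe -v -r nvme-tcp       # also reports nvme_tcp / nvme_fabrics / nvme_keyring removal
modprobe -v -r nvme-fabrics
# nvmf_tcp_fini: tear down the test namespace and flush the initiator interface
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1
```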
19:14:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:18.980 19:14:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.980 ************************************ 00:20:18.980 END TEST nvmf_shutdown_tc3 00:20:18.980 ************************************ 00:20:18.980 19:14:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:18.980 19:14:58 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:18.980 00:20:18.980 real 0m28.304s 00:20:18.980 user 1m19.708s 00:20:18.980 sys 0m6.318s 00:20:18.980 19:14:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:18.980 19:14:58 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:18.980 ************************************ 00:20:18.980 END TEST nvmf_shutdown 00:20:18.980 ************************************ 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:18.980 19:14:58 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.980 19:14:58 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.980 19:14:58 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:18.980 19:14:58 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.980 19:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.980 ************************************ 00:20:18.980 START TEST nvmf_multicontroller 00:20:18.980 ************************************ 00:20:18.980 19:14:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:18.980 * Looking for test storage... 
00:20:18.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:18.980 19:14:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.980 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:18.980 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.980 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.980 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.981 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.981 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.981 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.981 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.981 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.981 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.981 19:14:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:18.981 19:14:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.981 19:14:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.883 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.884 19:15:00 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:20.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:20.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:20.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:20.884 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.884 19:15:00 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:20:20.884 00:20:20.884 --- 10.0.0.2 ping statistics --- 00:20:20.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.884 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:20:20.884 00:20:20.884 --- 10.0.0.1 ping statistics --- 00:20:20.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.884 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:20.884 19:15:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3353609 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3353609 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3353609 ']' 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:20.884 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:20.884 [2024-07-15 19:15:01.048619] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:20.884 [2024-07-15 19:15:01.048694] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.884 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.884 [2024-07-15 19:15:01.113067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:20.884 [2024-07-15 19:15:01.222568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.884 [2024-07-15 19:15:01.222625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.884 [2024-07-15 19:15:01.222653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.884 [2024-07-15 19:15:01.222665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.884 [2024-07-15 19:15:01.222674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
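In condensed form, the namespace topology that the nvmf_tcp_init and nvmfappstart steps above build is sketched below; the commands, interface names and addresses are the ones visible in the trace itself, with the long Jenkins workspace path abbreviated to <spdk>:

    ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side E810 port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic reach the initiator port
    ip netns exec cvl_0_0_ns_spdk <spdk>/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # target on cores 1-3

The two pings (10.0.0.2 from the host side, 10.0.0.1 from inside the namespace) only verify this back-to-back path; the EAL and reactor notices around this point are that nvmf_tgt instance coming up on the three cores selected by -m 0xE.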
00:20:20.884 [2024-07-15 19:15:01.222812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.884 [2024-07-15 19:15:01.222885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.884 [2024-07-15 19:15:01.222888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 [2024-07-15 19:15:01.362966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 Malloc0 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 [2024-07-15 19:15:01.426040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 
19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 [2024-07-15 19:15:01.433931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 Malloc1 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3353661 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3353661 /var/tmp/bdevperf.sock 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3353661 ']' 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.144 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.402 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.402 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:21.402 19:15:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:21.402 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.402 19:15:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.660 NVMe0n1 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.660 1 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.660 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.660 request: 00:20:21.660 { 00:20:21.660 "name": "NVMe0", 00:20:21.660 "trtype": "tcp", 00:20:21.660 "traddr": "10.0.0.2", 00:20:21.660 "adrfam": "ipv4", 00:20:21.660 "trsvcid": "4420", 00:20:21.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.660 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:21.661 "hostaddr": "10.0.0.2", 00:20:21.661 "hostsvcid": "60000", 00:20:21.661 "prchk_reftag": false, 00:20:21.661 "prchk_guard": false, 00:20:21.661 "hdgst": false, 00:20:21.661 "ddgst": false, 00:20:21.661 "method": "bdev_nvme_attach_controller", 00:20:21.661 "req_id": 1 00:20:21.661 } 00:20:21.661 Got JSON-RPC error response 00:20:21.661 response: 00:20:21.661 { 00:20:21.661 "code": -114, 00:20:21.661 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:21.661 } 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.661 request: 00:20:21.661 { 00:20:21.661 "name": "NVMe0", 00:20:21.661 "trtype": "tcp", 00:20:21.661 "traddr": "10.0.0.2", 00:20:21.661 "adrfam": "ipv4", 00:20:21.661 "trsvcid": "4420", 00:20:21.661 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.661 "hostaddr": "10.0.0.2", 00:20:21.661 "hostsvcid": "60000", 00:20:21.661 "prchk_reftag": false, 00:20:21.661 "prchk_guard": false, 00:20:21.661 
"hdgst": false, 00:20:21.661 "ddgst": false, 00:20:21.661 "method": "bdev_nvme_attach_controller", 00:20:21.661 "req_id": 1 00:20:21.661 } 00:20:21.661 Got JSON-RPC error response 00:20:21.661 response: 00:20:21.661 { 00:20:21.661 "code": -114, 00:20:21.661 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:21.661 } 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.661 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:21.920 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.921 request: 00:20:21.921 { 00:20:21.921 "name": "NVMe0", 00:20:21.921 "trtype": "tcp", 00:20:21.921 "traddr": "10.0.0.2", 00:20:21.921 "adrfam": "ipv4", 00:20:21.921 "trsvcid": "4420", 00:20:21.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.921 "hostaddr": "10.0.0.2", 00:20:21.921 "hostsvcid": "60000", 00:20:21.921 "prchk_reftag": false, 00:20:21.921 "prchk_guard": false, 00:20:21.921 "hdgst": false, 00:20:21.921 "ddgst": false, 00:20:21.921 "multipath": "disable", 00:20:21.921 "method": "bdev_nvme_attach_controller", 00:20:21.921 "req_id": 1 00:20:21.921 } 00:20:21.921 Got JSON-RPC error response 00:20:21.921 response: 00:20:21.921 { 00:20:21.921 "code": -114, 00:20:21.921 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:21.921 } 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:21.921 19:15:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.921 request: 00:20:21.921 { 00:20:21.921 "name": "NVMe0", 00:20:21.921 "trtype": "tcp", 00:20:21.921 "traddr": "10.0.0.2", 00:20:21.921 "adrfam": "ipv4", 00:20:21.921 "trsvcid": "4420", 00:20:21.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.921 "hostaddr": "10.0.0.2", 00:20:21.921 "hostsvcid": "60000", 00:20:21.921 "prchk_reftag": false, 00:20:21.921 "prchk_guard": false, 00:20:21.921 "hdgst": false, 00:20:21.921 "ddgst": false, 00:20:21.921 "multipath": "failover", 00:20:21.921 "method": "bdev_nvme_attach_controller", 00:20:21.921 "req_id": 1 00:20:21.921 } 00:20:21.921 Got JSON-RPC error response 00:20:21.921 response: 00:20:21.921 { 00:20:21.921 "code": -114, 00:20:21.921 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:21.921 } 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.921 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.921 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:22.181 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:22.181 19:15:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:23.144 0 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3353661 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3353661 ']' 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3353661 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3353661 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3353661' 00:20:23.144 killing process with pid 3353661 00:20:23.144 19:15:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3353661 00:20:23.144 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3353661 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:23.402 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:23.402 [2024-07-15 19:15:01.534596] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:23.402 [2024-07-15 19:15:01.534700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353661 ] 00:20:23.402 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.402 [2024-07-15 19:15:01.601038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.402 [2024-07-15 19:15:01.711230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.402 [2024-07-15 19:15:02.375053] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e9ed791e-f137-4d1c-bacb-75f885e44c52 already exists 00:20:23.402 [2024-07-15 19:15:02.375095] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e9ed791e-f137-4d1c-bacb-75f885e44c52 alias for bdev NVMe1n1 00:20:23.402 [2024-07-15 19:15:02.375111] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:23.402 Running I/O for 1 seconds... 
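In condensed form, the bdevperf side of this run looks like the sketch below; the commands are the ones replayed in the trace above, shown as direct invocations with long paths abbreviated to <spdk>, the negative attach attempts and the intermediate attach/detach on port 4421 omitted, and rpc_cmd assumed to resolve to the equivalent scripts/rpc.py calls:

    <spdk>/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # first controller: NVMe0 on the 4420 listener of cnode1, pinned to hostaddr 10.0.0.2 / hostsvcid 60000 (-i / -c)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # second controller: NVMe1 on the 4421 listener of the same subsystem
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers      # the test expects exactly 2 controllers here
    <spdk>/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # kicks off the 1 s write job

The bdev.c and bdev_nvme.c *ERROR* lines just above appear to be the expected side effect of registering a second controller whose namespace carries the same UUID already exposed as NVMe0n1; the write workload whose results follow runs against NVMe0n1.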
00:20:23.402 00:20:23.402 Latency(us) 00:20:23.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.402 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:23.402 NVMe0n1 : 1.01 18996.81 74.21 0.00 0.00 6719.34 3835.07 14951.92 00:20:23.402 =================================================================================================================== 00:20:23.402 Total : 18996.81 74.21 0.00 0.00 6719.34 3835.07 14951.92 00:20:23.402 Received shutdown signal, test time was about 1.000000 seconds 00:20:23.402 00:20:23.402 Latency(us) 00:20:23.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.402 =================================================================================================================== 00:20:23.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:23.402 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.402 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.663 rmmod nvme_tcp 00:20:23.663 rmmod nvme_fabrics 00:20:23.663 rmmod nvme_keyring 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3353609 ']' 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3353609 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3353609 ']' 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3353609 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3353609 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3353609' 00:20:23.663 killing process with pid 3353609 00:20:23.663 19:15:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3353609 00:20:23.663 19:15:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3353609 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.922 19:15:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.492 19:15:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.492 00:20:26.492 real 0m7.357s 00:20:26.492 user 0m11.901s 00:20:26.492 sys 0m2.149s 00:20:26.492 19:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.492 19:15:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:26.492 ************************************ 00:20:26.492 END TEST nvmf_multicontroller 00:20:26.492 ************************************ 00:20:26.492 19:15:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:26.492 19:15:06 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:26.492 19:15:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:26.492 19:15:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.492 19:15:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:26.492 ************************************ 00:20:26.492 START TEST nvmf_aer 00:20:26.492 ************************************ 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:26.492 * Looking for test storage... 
00:20:26.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.492 19:15:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.394 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:28.395 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:20:28.395 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:28.395 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:28.395 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.395 
19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:20:28.395 00:20:28.395 --- 10.0.0.2 ping statistics --- 00:20:28.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.395 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:20:28.395 00:20:28.395 --- 10.0.0.1 ping statistics --- 00:20:28.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.395 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3356305 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3356305 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3356305 ']' 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.395 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.395 [2024-07-15 19:15:08.606249] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:28.395 [2024-07-15 19:15:08.606335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.395 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.395 [2024-07-15 19:15:08.674178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.395 [2024-07-15 19:15:08.795920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.395 [2024-07-15 19:15:08.795974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:28.395 [2024-07-15 19:15:08.795990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.395 [2024-07-15 19:15:08.796003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.395 [2024-07-15 19:15:08.796014] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.395 [2024-07-15 19:15:08.796087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.395 [2024-07-15 19:15:08.796167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.395 [2024-07-15 19:15:08.799899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.395 [2024-07-15 19:15:08.799912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 [2024-07-15 19:15:08.961540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 Malloc0 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.654 19:15:08 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 [2024-07-15 19:15:09.012712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 [ 00:20:28.654 { 00:20:28.654 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:28.654 "subtype": "Discovery", 00:20:28.654 "listen_addresses": [], 00:20:28.654 "allow_any_host": true, 00:20:28.654 "hosts": [] 00:20:28.654 }, 00:20:28.654 { 00:20:28.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.654 "subtype": "NVMe", 00:20:28.654 "listen_addresses": [ 00:20:28.654 { 00:20:28.654 "trtype": "TCP", 00:20:28.654 "adrfam": "IPv4", 00:20:28.654 "traddr": "10.0.0.2", 00:20:28.654 "trsvcid": "4420" 00:20:28.654 } 00:20:28.654 ], 00:20:28.654 "allow_any_host": true, 00:20:28.654 "hosts": [], 00:20:28.654 "serial_number": "SPDK00000000000001", 00:20:28.654 "model_number": "SPDK bdev Controller", 00:20:28.654 "max_namespaces": 2, 00:20:28.654 "min_cntlid": 1, 00:20:28.654 "max_cntlid": 65519, 00:20:28.654 "namespaces": [ 00:20:28.654 { 00:20:28.654 "nsid": 1, 00:20:28.654 "bdev_name": "Malloc0", 00:20:28.654 "name": "Malloc0", 00:20:28.654 "nguid": "1B5C4C649E2340F5838249A62742F997", 00:20:28.654 "uuid": "1b5c4c64-9e23-40f5-8382-49a62742f997" 00:20:28.654 } 00:20:28.654 ] 00:20:28.654 } 00:20:28.654 ] 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3356548 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:28.654 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:28.654 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.912 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 Malloc1 00:20:29.170 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:29.170 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.170 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:29.171 [ 00:20:29.171 { 00:20:29.171 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:29.171 "subtype": "Discovery", 00:20:29.171 "listen_addresses": [], 00:20:29.171 "allow_any_host": true, 00:20:29.171 "hosts": [] 00:20:29.171 }, 00:20:29.171 { 00:20:29.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.171 "subtype": "NVMe", 00:20:29.171 "listen_addresses": [ 00:20:29.171 { 00:20:29.171 "trtype": "TCP", 00:20:29.171 "adrfam": "IPv4", 00:20:29.171 "traddr": "10.0.0.2", 00:20:29.171 "trsvcid": "4420" 00:20:29.171 } 00:20:29.171 ], 00:20:29.171 "allow_any_host": true, 00:20:29.171 "hosts": [], 00:20:29.171 "serial_number": "SPDK00000000000001", 00:20:29.171 "model_number": "SPDK bdev Controller", 00:20:29.171 "max_namespaces": 2, 00:20:29.171 "min_cntlid": 1, 00:20:29.171 "max_cntlid": 65519, 00:20:29.171 "namespaces": [ 00:20:29.171 { 00:20:29.171 "nsid": 1, 00:20:29.171 "bdev_name": "Malloc0", 00:20:29.171 "name": "Malloc0", 00:20:29.171 "nguid": "1B5C4C649E2340F5838249A62742F997", 00:20:29.171 "uuid": "1b5c4c64-9e23-40f5-8382-49a62742f997" 00:20:29.171 }, 00:20:29.171 { 00:20:29.171 "nsid": 2, 00:20:29.171 "bdev_name": "Malloc1", 00:20:29.171 "name": "Malloc1", 00:20:29.171 "nguid": "3B53DB7C7CFD4D44954569839FE29AD7", 00:20:29.171 "uuid": "3b53db7c-7cfd-4d44-9545-69839fe29ad7" 00:20:29.171 } 00:20:29.171 ] 00:20:29.171 } 00:20:29.171 ] 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3356548 00:20:29.171 Asynchronous Event Request test 00:20:29.171 Attaching to 10.0.0.2 00:20:29.171 Attached to 10.0.0.2 00:20:29.171 Registering asynchronous event callbacks... 00:20:29.171 Starting namespace attribute notice tests for all controllers... 
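The aer_cb line that follows is the payoff of the sequence above: once the aer tool created /tmp/aer_touch_file, host/aer.sh added a second namespace (Malloc1, nsid 2) to nqn.2016-06.io.spdk:cnode1, which raises a Changed Namespace asynchronous event on the connected controller. For reference, a minimal sketch of that RPC sequence, reconstructed only from the rpc_cmd calls visible in this log; invoking scripts/rpc.py directly is an assumption here (the autotest rpc_cmd wrapper resolves the real path and talks to the nvmf_tgt listening on /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace):

  # target setup performed earlier by host/aer.sh (see the rpc_cmd calls above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with test/nvme/aer/aer connected (-n 2 -t /tmp/aer_touch_file) and waiting,
  # adding a second namespace triggers the Changed Namespace AEN reported below
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2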
00:20:29.171 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:29.171 aer_cb - Changed Namespace 00:20:29.171 Cleaning up... 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.171 rmmod nvme_tcp 00:20:29.171 rmmod nvme_fabrics 00:20:29.171 rmmod nvme_keyring 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3356305 ']' 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3356305 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3356305 ']' 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3356305 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3356305 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3356305' 00:20:29.171 killing process with pid 3356305 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3356305 00:20:29.171 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3356305 00:20:29.429 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:20:29.429 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.429 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.429 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.690 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.690 19:15:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.690 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.690 19:15:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.592 19:15:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.592 00:20:31.592 real 0m5.555s 00:20:31.592 user 0m4.737s 00:20:31.592 sys 0m1.949s 00:20:31.592 19:15:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:31.592 19:15:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:31.592 ************************************ 00:20:31.592 END TEST nvmf_aer 00:20:31.592 ************************************ 00:20:31.592 19:15:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:31.592 19:15:11 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:31.592 19:15:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:31.592 19:15:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:31.592 19:15:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.592 ************************************ 00:20:31.592 START TEST nvmf_async_init 00:20:31.592 ************************************ 00:20:31.592 19:15:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:31.592 * Looking for test storage... 
00:20:31.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:31.593 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ebda7e9d5f624572bfd050f518a203ce 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.851 19:15:12 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.851 19:15:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.752 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.752 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.752 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.752 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.752 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:33.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:33.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:33.753 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:33.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.753 19:15:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:20:33.753 00:20:33.753 --- 10.0.0.2 ping statistics --- 00:20:33.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.753 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:20:33.753 00:20:33.753 --- 10.0.0.1 ping statistics --- 00:20:33.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.753 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3358500 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3358500 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3358500 ']' 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.753 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:33.753 [2024-07-15 19:15:14.148084] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:20:33.753 [2024-07-15 19:15:14.148160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.753 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.012 [2024-07-15 19:15:14.211156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.012 [2024-07-15 19:15:14.317615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.012 [2024-07-15 19:15:14.317681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.012 [2024-07-15 19:15:14.317695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.012 [2024-07-15 19:15:14.317705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.012 [2024-07-15 19:15:14.317715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.012 [2024-07-15 19:15:14.317741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.012 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.012 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:34.012 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.012 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.012 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 [2024-07-15 19:15:14.459230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 null0 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 19:15:14 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ebda7e9d5f624572bfd050f518a203ce 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.270 [2024-07-15 19:15:14.499485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.270 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 nvme0n1 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 [ 00:20:34.529 { 00:20:34.529 "name": "nvme0n1", 00:20:34.529 "aliases": [ 00:20:34.529 "ebda7e9d-5f62-4572-bfd0-50f518a203ce" 00:20:34.529 ], 00:20:34.529 "product_name": "NVMe disk", 00:20:34.529 "block_size": 512, 00:20:34.529 "num_blocks": 2097152, 00:20:34.529 "uuid": "ebda7e9d-5f62-4572-bfd0-50f518a203ce", 00:20:34.529 "assigned_rate_limits": { 00:20:34.529 "rw_ios_per_sec": 0, 00:20:34.529 "rw_mbytes_per_sec": 0, 00:20:34.529 "r_mbytes_per_sec": 0, 00:20:34.529 "w_mbytes_per_sec": 0 00:20:34.529 }, 00:20:34.529 "claimed": false, 00:20:34.529 "zoned": false, 00:20:34.529 "supported_io_types": { 00:20:34.529 "read": true, 00:20:34.529 "write": true, 00:20:34.529 "unmap": false, 00:20:34.529 "flush": true, 00:20:34.529 "reset": true, 00:20:34.529 "nvme_admin": true, 00:20:34.529 "nvme_io": true, 00:20:34.529 "nvme_io_md": false, 00:20:34.529 "write_zeroes": true, 00:20:34.529 "zcopy": false, 00:20:34.529 "get_zone_info": false, 00:20:34.529 "zone_management": false, 00:20:34.529 "zone_append": false, 00:20:34.529 "compare": true, 00:20:34.529 "compare_and_write": true, 00:20:34.529 "abort": true, 00:20:34.529 "seek_hole": false, 00:20:34.529 "seek_data": false, 00:20:34.529 "copy": true, 00:20:34.529 "nvme_iov_md": false 00:20:34.529 }, 00:20:34.529 "memory_domains": [ 00:20:34.529 { 00:20:34.529 "dma_device_id": "system", 00:20:34.529 "dma_device_type": 1 00:20:34.529 } 00:20:34.529 ], 00:20:34.529 "driver_specific": { 00:20:34.529 "nvme": [ 00:20:34.529 { 00:20:34.529 "trid": { 00:20:34.529 "trtype": "TCP", 00:20:34.529 "adrfam": "IPv4", 00:20:34.529 "traddr": "10.0.0.2", 
00:20:34.529 "trsvcid": "4420", 00:20:34.529 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:34.529 }, 00:20:34.529 "ctrlr_data": { 00:20:34.529 "cntlid": 1, 00:20:34.529 "vendor_id": "0x8086", 00:20:34.529 "model_number": "SPDK bdev Controller", 00:20:34.529 "serial_number": "00000000000000000000", 00:20:34.529 "firmware_revision": "24.09", 00:20:34.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.529 "oacs": { 00:20:34.529 "security": 0, 00:20:34.529 "format": 0, 00:20:34.529 "firmware": 0, 00:20:34.529 "ns_manage": 0 00:20:34.529 }, 00:20:34.529 "multi_ctrlr": true, 00:20:34.529 "ana_reporting": false 00:20:34.529 }, 00:20:34.529 "vs": { 00:20:34.529 "nvme_version": "1.3" 00:20:34.529 }, 00:20:34.529 "ns_data": { 00:20:34.529 "id": 1, 00:20:34.529 "can_share": true 00:20:34.529 } 00:20:34.529 } 00:20:34.529 ], 00:20:34.529 "mp_policy": "active_passive" 00:20:34.529 } 00:20:34.529 } 00:20:34.529 ] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 [2024-07-15 19:15:14.752568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:34.529 [2024-07-15 19:15:14.752656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208d090 (9): Bad file descriptor 00:20:34.529 [2024-07-15 19:15:14.895044] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 [ 00:20:34.529 { 00:20:34.529 "name": "nvme0n1", 00:20:34.529 "aliases": [ 00:20:34.529 "ebda7e9d-5f62-4572-bfd0-50f518a203ce" 00:20:34.529 ], 00:20:34.529 "product_name": "NVMe disk", 00:20:34.529 "block_size": 512, 00:20:34.529 "num_blocks": 2097152, 00:20:34.529 "uuid": "ebda7e9d-5f62-4572-bfd0-50f518a203ce", 00:20:34.529 "assigned_rate_limits": { 00:20:34.529 "rw_ios_per_sec": 0, 00:20:34.529 "rw_mbytes_per_sec": 0, 00:20:34.529 "r_mbytes_per_sec": 0, 00:20:34.529 "w_mbytes_per_sec": 0 00:20:34.529 }, 00:20:34.529 "claimed": false, 00:20:34.529 "zoned": false, 00:20:34.529 "supported_io_types": { 00:20:34.529 "read": true, 00:20:34.529 "write": true, 00:20:34.529 "unmap": false, 00:20:34.529 "flush": true, 00:20:34.529 "reset": true, 00:20:34.529 "nvme_admin": true, 00:20:34.529 "nvme_io": true, 00:20:34.529 "nvme_io_md": false, 00:20:34.529 "write_zeroes": true, 00:20:34.529 "zcopy": false, 00:20:34.529 "get_zone_info": false, 00:20:34.529 "zone_management": false, 00:20:34.529 "zone_append": false, 00:20:34.529 "compare": true, 00:20:34.529 "compare_and_write": true, 00:20:34.529 "abort": true, 00:20:34.529 "seek_hole": false, 00:20:34.529 "seek_data": false, 00:20:34.529 "copy": true, 00:20:34.529 "nvme_iov_md": false 00:20:34.529 }, 00:20:34.529 "memory_domains": [ 00:20:34.529 { 00:20:34.529 "dma_device_id": "system", 00:20:34.529 "dma_device_type": 
1 00:20:34.529 } 00:20:34.529 ], 00:20:34.529 "driver_specific": { 00:20:34.529 "nvme": [ 00:20:34.529 { 00:20:34.529 "trid": { 00:20:34.529 "trtype": "TCP", 00:20:34.529 "adrfam": "IPv4", 00:20:34.529 "traddr": "10.0.0.2", 00:20:34.529 "trsvcid": "4420", 00:20:34.529 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:34.529 }, 00:20:34.529 "ctrlr_data": { 00:20:34.529 "cntlid": 2, 00:20:34.529 "vendor_id": "0x8086", 00:20:34.529 "model_number": "SPDK bdev Controller", 00:20:34.529 "serial_number": "00000000000000000000", 00:20:34.529 "firmware_revision": "24.09", 00:20:34.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.529 "oacs": { 00:20:34.529 "security": 0, 00:20:34.529 "format": 0, 00:20:34.529 "firmware": 0, 00:20:34.529 "ns_manage": 0 00:20:34.529 }, 00:20:34.529 "multi_ctrlr": true, 00:20:34.529 "ana_reporting": false 00:20:34.529 }, 00:20:34.529 "vs": { 00:20:34.529 "nvme_version": "1.3" 00:20:34.529 }, 00:20:34.529 "ns_data": { 00:20:34.529 "id": 1, 00:20:34.529 "can_share": true 00:20:34.529 } 00:20:34.529 } 00:20:34.529 ], 00:20:34.529 "mp_policy": "active_passive" 00:20:34.529 } 00:20:34.529 } 00:20:34.529 ] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RJM6OLro51 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RJM6OLro51 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 [2024-07-15 19:15:14.949256] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:34.529 [2024-07-15 19:15:14.949433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RJM6OLro51 00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:34.529 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.529 [2024-07-15 19:15:14.957290] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:34.788 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.788 19:15:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RJM6OLro51 00:20:34.788 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.788 19:15:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.788 [2024-07-15 19:15:14.965337] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.788 [2024-07-15 19:15:14.965402] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:34.788 nvme0n1 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.788 [ 00:20:34.788 { 00:20:34.788 "name": "nvme0n1", 00:20:34.788 "aliases": [ 00:20:34.788 "ebda7e9d-5f62-4572-bfd0-50f518a203ce" 00:20:34.788 ], 00:20:34.788 "product_name": "NVMe disk", 00:20:34.788 "block_size": 512, 00:20:34.788 "num_blocks": 2097152, 00:20:34.788 "uuid": "ebda7e9d-5f62-4572-bfd0-50f518a203ce", 00:20:34.788 "assigned_rate_limits": { 00:20:34.788 "rw_ios_per_sec": 0, 00:20:34.788 "rw_mbytes_per_sec": 0, 00:20:34.788 "r_mbytes_per_sec": 0, 00:20:34.788 "w_mbytes_per_sec": 0 00:20:34.788 }, 00:20:34.788 "claimed": false, 00:20:34.788 "zoned": false, 00:20:34.788 "supported_io_types": { 00:20:34.788 "read": true, 00:20:34.788 "write": true, 00:20:34.788 "unmap": false, 00:20:34.788 "flush": true, 00:20:34.788 "reset": true, 00:20:34.788 "nvme_admin": true, 00:20:34.788 "nvme_io": true, 00:20:34.788 "nvme_io_md": false, 00:20:34.788 "write_zeroes": true, 00:20:34.788 "zcopy": false, 00:20:34.788 "get_zone_info": false, 00:20:34.788 "zone_management": false, 00:20:34.788 "zone_append": false, 00:20:34.788 "compare": true, 00:20:34.788 "compare_and_write": true, 00:20:34.788 "abort": true, 00:20:34.788 "seek_hole": false, 00:20:34.788 "seek_data": false, 00:20:34.788 "copy": true, 00:20:34.788 "nvme_iov_md": false 00:20:34.788 }, 00:20:34.788 "memory_domains": [ 00:20:34.788 { 00:20:34.788 "dma_device_id": "system", 00:20:34.788 "dma_device_type": 1 00:20:34.788 } 00:20:34.788 ], 00:20:34.788 "driver_specific": { 00:20:34.788 "nvme": [ 00:20:34.788 { 00:20:34.788 "trid": { 00:20:34.788 "trtype": "TCP", 00:20:34.788 "adrfam": "IPv4", 00:20:34.788 "traddr": "10.0.0.2", 00:20:34.788 "trsvcid": "4421", 00:20:34.788 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:34.788 }, 00:20:34.788 "ctrlr_data": { 00:20:34.788 "cntlid": 3, 00:20:34.788 "vendor_id": "0x8086", 00:20:34.788 "model_number": "SPDK bdev Controller", 00:20:34.788 "serial_number": "00000000000000000000", 00:20:34.788 "firmware_revision": "24.09", 00:20:34.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:20:34.788 "oacs": { 00:20:34.788 "security": 0, 00:20:34.788 "format": 0, 00:20:34.788 "firmware": 0, 00:20:34.788 "ns_manage": 0 00:20:34.788 }, 00:20:34.788 "multi_ctrlr": true, 00:20:34.788 "ana_reporting": false 00:20:34.788 }, 00:20:34.788 "vs": { 00:20:34.788 "nvme_version": "1.3" 00:20:34.788 }, 00:20:34.788 "ns_data": { 00:20:34.788 "id": 1, 00:20:34.788 "can_share": true 00:20:34.788 } 00:20:34.788 } 00:20:34.788 ], 00:20:34.788 "mp_policy": "active_passive" 00:20:34.788 } 00:20:34.788 } 00:20:34.788 ] 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.RJM6OLro51 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:34.788 rmmod nvme_tcp 00:20:34.788 rmmod nvme_fabrics 00:20:34.788 rmmod nvme_keyring 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3358500 ']' 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3358500 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3358500 ']' 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3358500 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3358500 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3358500' 00:20:34.788 killing process with pid 3358500 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3358500 00:20:34.788 [2024-07-15 19:15:15.160073] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:20:34.788 [2024-07-15 19:15:15.160110] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:34.788 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3358500 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.047 19:15:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.580 19:15:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.580 00:20:37.580 real 0m5.518s 00:20:37.580 user 0m2.123s 00:20:37.580 sys 0m1.820s 00:20:37.580 19:15:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.580 19:15:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:37.580 ************************************ 00:20:37.580 END TEST nvmf_async_init 00:20:37.580 ************************************ 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:37.580 19:15:17 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.580 ************************************ 00:20:37.580 START TEST dma 00:20:37.580 ************************************ 00:20:37.580 19:15:17 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:37.580 * Looking for test storage... 
00:20:37.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:37.580 19:15:17 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.580 19:15:17 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.580 19:15:17 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.580 19:15:17 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.580 19:15:17 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.580 19:15:17 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.580 19:15:17 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.580 19:15:17 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:37.580 19:15:17 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.580 19:15:17 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.580 19:15:17 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:37.580 19:15:17 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:37.580 00:20:37.580 real 0m0.067s 00:20:37.580 user 0m0.026s 00:20:37.580 sys 0m0.047s 00:20:37.580 19:15:17 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.580 19:15:17 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:37.580 ************************************ 00:20:37.580 END TEST dma 00:20:37.580 ************************************ 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:37.580 19:15:17 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.580 19:15:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.580 ************************************ 00:20:37.580 START TEST nvmf_identify 00:20:37.580 ************************************ 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:37.580 * Looking for test storage... 
00:20:37.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.580 19:15:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.581 19:15:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:39.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:39.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:39.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:39.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.478 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:39.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:20:39.479 00:20:39.479 --- 10.0.0.2 ping statistics --- 00:20:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.479 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:39.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:20:39.479 00:20:39.479 --- 10.0.0.1 ping statistics --- 00:20:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.479 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3360623 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3360623 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3360623 ']' 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.479 19:15:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.479 [2024-07-15 19:15:19.765850] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:39.479 [2024-07-15 19:15:19.765939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.479 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.479 [2024-07-15 19:15:19.836517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.736 [2024-07-15 19:15:19.955244] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:39.736 [2024-07-15 19:15:19.955300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.736 [2024-07-15 19:15:19.955326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.736 [2024-07-15 19:15:19.955339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.736 [2024-07-15 19:15:19.955351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.736 [2024-07-15 19:15:19.955437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.736 [2024-07-15 19:15:19.955494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.736 [2024-07-15 19:15:19.955615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.737 [2024-07-15 19:15:19.955618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.737 [2024-07-15 19:15:20.102717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.737 Malloc0 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.737 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.994 [2024-07-15 19:15:20.178680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:39.994 [ 00:20:39.994 { 00:20:39.994 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:39.994 "subtype": "Discovery", 00:20:39.994 "listen_addresses": [ 00:20:39.994 { 00:20:39.994 "trtype": "TCP", 00:20:39.994 "adrfam": "IPv4", 00:20:39.994 "traddr": "10.0.0.2", 00:20:39.994 "trsvcid": "4420" 00:20:39.994 } 00:20:39.994 ], 00:20:39.994 "allow_any_host": true, 00:20:39.994 "hosts": [] 00:20:39.994 }, 00:20:39.994 { 00:20:39.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.994 "subtype": "NVMe", 00:20:39.994 "listen_addresses": [ 00:20:39.994 { 00:20:39.994 "trtype": "TCP", 00:20:39.994 "adrfam": "IPv4", 00:20:39.994 "traddr": "10.0.0.2", 00:20:39.994 "trsvcid": "4420" 00:20:39.994 } 00:20:39.994 ], 00:20:39.994 "allow_any_host": true, 00:20:39.994 "hosts": [], 00:20:39.994 "serial_number": "SPDK00000000000001", 00:20:39.994 "model_number": "SPDK bdev Controller", 00:20:39.994 "max_namespaces": 32, 00:20:39.994 "min_cntlid": 1, 00:20:39.994 "max_cntlid": 65519, 00:20:39.994 "namespaces": [ 00:20:39.994 { 00:20:39.994 "nsid": 1, 00:20:39.994 "bdev_name": "Malloc0", 00:20:39.994 "name": "Malloc0", 00:20:39.994 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:39.994 "eui64": "ABCDEF0123456789", 00:20:39.994 "uuid": "326d49ce-3318-472f-b05a-27b841ec4d8b" 00:20:39.994 } 00:20:39.994 ] 00:20:39.994 } 00:20:39.994 ] 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.994 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:39.994 [2024-07-15 19:15:20.222331] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:20:39.994 [2024-07-15 19:15:20.222383] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360719 ] 00:20:39.994 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.994 [2024-07-15 19:15:20.258139] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:39.994 [2024-07-15 19:15:20.258221] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:39.994 [2024-07-15 19:15:20.258231] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:39.994 [2024-07-15 19:15:20.258246] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:39.994 [2024-07-15 19:15:20.258256] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:39.994 [2024-07-15 19:15:20.261930] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:39.994 [2024-07-15 19:15:20.261996] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf29540 0 00:20:39.994 [2024-07-15 19:15:20.268896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:39.994 [2024-07-15 19:15:20.268918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:39.994 [2024-07-15 19:15:20.268934] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:39.994 [2024-07-15 19:15:20.268940] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:39.994 [2024-07-15 19:15:20.268990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.269003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.269010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.994 [2024-07-15 19:15:20.269026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:39.994 [2024-07-15 19:15:20.269052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.994 [2024-07-15 19:15:20.276894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.994 [2024-07-15 19:15:20.276912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.994 [2024-07-15 19:15:20.276919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.276937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.994 [2024-07-15 19:15:20.276956] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:39.994 [2024-07-15 19:15:20.276968] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:39.994 [2024-07-15 19:15:20.276982] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:39.994 [2024-07-15 19:15:20.277003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277012] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.994 [2024-07-15 19:15:20.277029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.994 [2024-07-15 19:15:20.277064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.994 [2024-07-15 19:15:20.277248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.994 [2024-07-15 19:15:20.277264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.994 [2024-07-15 19:15:20.277271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.994 [2024-07-15 19:15:20.277286] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:39.994 [2024-07-15 19:15:20.277299] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:39.994 [2024-07-15 19:15:20.277312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.994 [2024-07-15 19:15:20.277337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.994 [2024-07-15 19:15:20.277358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.994 [2024-07-15 19:15:20.277530] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.994 [2024-07-15 19:15:20.277541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.994 [2024-07-15 19:15:20.277548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.994 [2024-07-15 19:15:20.277563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:39.994 [2024-07-15 19:15:20.277576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:39.994 [2024-07-15 19:15:20.277588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.994 [2024-07-15 19:15:20.277611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.994 [2024-07-15 19:15:20.277632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.994 [2024-07-15 19:15:20.277770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.994 
[2024-07-15 19:15:20.277785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.994 [2024-07-15 19:15:20.277791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.994 [2024-07-15 19:15:20.277806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:39.994 [2024-07-15 19:15:20.277828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.277844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.994 [2024-07-15 19:15:20.277854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.994 [2024-07-15 19:15:20.277888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.994 [2024-07-15 19:15:20.278020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.994 [2024-07-15 19:15:20.278032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.994 [2024-07-15 19:15:20.278039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.278045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.994 [2024-07-15 19:15:20.278053] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:39.994 [2024-07-15 19:15:20.278062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:39.994 [2024-07-15 19:15:20.278074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:39.994 [2024-07-15 19:15:20.278184] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:39.994 [2024-07-15 19:15:20.278192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:39.994 [2024-07-15 19:15:20.278205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.278213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.994 [2024-07-15 19:15:20.278219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.994 [2024-07-15 19:15:20.278245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.994 [2024-07-15 19:15:20.278267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.994 [2024-07-15 19:15:20.278461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.994 [2024-07-15 19:15:20.278477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.994 [2024-07-15 19:15:20.278484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:39.995 [2024-07-15 19:15:20.278490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.278498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:39.995 [2024-07-15 19:15:20.278515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.278524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.278530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.278540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.995 [2024-07-15 19:15:20.278560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.995 [2024-07-15 19:15:20.278692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.278707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.278714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.278720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.278728] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:39.995 [2024-07-15 19:15:20.278740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:39.995 [2024-07-15 19:15:20.278754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:39.995 [2024-07-15 19:15:20.278768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:39.995 [2024-07-15 19:15:20.278784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.278792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.278802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.995 [2024-07-15 19:15:20.278823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.995 [2024-07-15 19:15:20.279051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.995 [2024-07-15 19:15:20.279065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.995 [2024-07-15 19:15:20.279072] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.279079] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29540): datao=0, datal=4096, cccid=0 00:20:39.995 [2024-07-15 19:15:20.279087] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf893c0) on tqpair(0xf29540): expected_datao=0, payload_size=4096 00:20:39.995 [2024-07-15 19:15:20.279094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:39.995 [2024-07-15 19:15:20.279111] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.279121] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.320050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.320057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.320076] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:39.995 [2024-07-15 19:15:20.320090] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:39.995 [2024-07-15 19:15:20.320099] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:39.995 [2024-07-15 19:15:20.320107] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:39.995 [2024-07-15 19:15:20.320115] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:39.995 [2024-07-15 19:15:20.320123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:39.995 [2024-07-15 19:15:20.320137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:39.995 [2024-07-15 19:15:20.320149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320157] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320163] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.320175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.995 [2024-07-15 19:15:20.320197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.995 [2024-07-15 19:15:20.320334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.320347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.320353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.320372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.320395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.995 [2024-07-15 19:15:20.320405] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.320426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.995 [2024-07-15 19:15:20.320436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.320457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.995 [2024-07-15 19:15:20.320466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.320487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.995 [2024-07-15 19:15:20.320496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:39.995 [2024-07-15 19:15:20.320515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:39.995 [2024-07-15 19:15:20.320527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.320545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.995 [2024-07-15 19:15:20.320567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf893c0, cid 0, qid 0 00:20:39.995 [2024-07-15 19:15:20.320578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89540, cid 1, qid 0 00:20:39.995 [2024-07-15 19:15:20.320585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf896c0, cid 2, qid 0 00:20:39.995 [2024-07-15 19:15:20.320593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.995 [2024-07-15 19:15:20.320600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf899c0, cid 4, qid 0 00:20:39.995 [2024-07-15 19:15:20.320790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.320805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.320812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf899c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.320831] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:39.995 [2024-07-15 19:15:20.320840] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:39.995 [2024-07-15 19:15:20.320858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.320868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.324887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.995 [2024-07-15 19:15:20.324917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf899c0, cid 4, qid 0 00:20:39.995 [2024-07-15 19:15:20.325115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.995 [2024-07-15 19:15:20.325127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.995 [2024-07-15 19:15:20.325134] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325140] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29540): datao=0, datal=4096, cccid=4 00:20:39.995 [2024-07-15 19:15:20.325148] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf899c0) on tqpair(0xf29540): expected_datao=0, payload_size=4096 00:20:39.995 [2024-07-15 19:15:20.325155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325165] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325173] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.325225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.325231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf899c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.325256] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:39.995 [2024-07-15 19:15:20.325297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.325318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.995 [2024-07-15 19:15:20.325329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.325351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.995 [2024-07-15 19:15:20.325378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf899c0, cid 4, qid 0 00:20:39.995 [2024-07-15 19:15:20.325390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89b40, cid 5, qid 0 00:20:39.995 [2024-07-15 19:15:20.325596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.995 [2024-07-15 19:15:20.325608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.995 [2024-07-15 19:15:20.325615] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325621] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29540): datao=0, datal=1024, cccid=4 00:20:39.995 [2024-07-15 19:15:20.325628] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf899c0) on tqpair(0xf29540): expected_datao=0, payload_size=1024 00:20:39.995 [2024-07-15 19:15:20.325636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325645] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325656] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.325674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.325680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.325687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89b40) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.366036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.366055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.366063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf899c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.366088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366097] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.366108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.995 [2024-07-15 19:15:20.366137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf899c0, cid 4, qid 0 00:20:39.995 [2024-07-15 19:15:20.366300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.995 [2024-07-15 19:15:20.366316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.995 [2024-07-15 19:15:20.366323] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366329] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29540): datao=0, datal=3072, cccid=4 00:20:39.995 [2024-07-15 19:15:20.366337] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf899c0) on tqpair(0xf29540): expected_datao=0, payload_size=3072 00:20:39.995 [2024-07-15 19:15:20.366344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366354] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366361] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.366410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.366417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf899c0) on tqpair=0xf29540 00:20:39.995 [2024-07-15 19:15:20.366438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf29540) 00:20:39.995 [2024-07-15 19:15:20.366457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.995 [2024-07-15 19:15:20.366485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf899c0, cid 4, qid 0 00:20:39.995 [2024-07-15 19:15:20.366637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.995 [2024-07-15 19:15:20.366649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.995 [2024-07-15 19:15:20.366656] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366662] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf29540): datao=0, datal=8, cccid=4 00:20:39.995 [2024-07-15 19:15:20.366669] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf899c0) on tqpair(0xf29540): expected_datao=0, payload_size=8 00:20:39.995 [2024-07-15 19:15:20.366677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366686] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.366693] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.407891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.995 [2024-07-15 19:15:20.407925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.995 [2024-07-15 19:15:20.407932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.995 [2024-07-15 19:15:20.407939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf899c0) on tqpair=0xf29540 00:20:39.995 ===================================================== 00:20:39.995 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:39.995 ===================================================== 00:20:39.995 Controller Capabilities/Features 00:20:39.995 ================================ 00:20:39.995 Vendor ID: 0000 00:20:39.995 Subsystem Vendor ID: 0000 00:20:39.995 Serial Number: .................... 00:20:39.995 Model Number: ........................................ 
00:20:39.995 Firmware Version: 24.09 00:20:39.995 Recommended Arb Burst: 0 00:20:39.995 IEEE OUI Identifier: 00 00 00 00:20:39.995 Multi-path I/O 00:20:39.995 May have multiple subsystem ports: No 00:20:39.995 May have multiple controllers: No 00:20:39.995 Associated with SR-IOV VF: No 00:20:39.995 Max Data Transfer Size: 131072 00:20:39.995 Max Number of Namespaces: 0 00:20:39.995 Max Number of I/O Queues: 1024 00:20:39.995 NVMe Specification Version (VS): 1.3 00:20:39.995 NVMe Specification Version (Identify): 1.3 00:20:39.995 Maximum Queue Entries: 128 00:20:39.995 Contiguous Queues Required: Yes 00:20:39.995 Arbitration Mechanisms Supported 00:20:39.995 Weighted Round Robin: Not Supported 00:20:39.995 Vendor Specific: Not Supported 00:20:39.995 Reset Timeout: 15000 ms 00:20:39.995 Doorbell Stride: 4 bytes 00:20:39.995 NVM Subsystem Reset: Not Supported 00:20:39.995 Command Sets Supported 00:20:39.995 NVM Command Set: Supported 00:20:39.995 Boot Partition: Not Supported 00:20:39.995 Memory Page Size Minimum: 4096 bytes 00:20:39.995 Memory Page Size Maximum: 4096 bytes 00:20:39.995 Persistent Memory Region: Not Supported 00:20:39.995 Optional Asynchronous Events Supported 00:20:39.995 Namespace Attribute Notices: Not Supported 00:20:39.995 Firmware Activation Notices: Not Supported 00:20:39.995 ANA Change Notices: Not Supported 00:20:39.995 PLE Aggregate Log Change Notices: Not Supported 00:20:39.995 LBA Status Info Alert Notices: Not Supported 00:20:39.995 EGE Aggregate Log Change Notices: Not Supported 00:20:39.995 Normal NVM Subsystem Shutdown event: Not Supported 00:20:39.995 Zone Descriptor Change Notices: Not Supported 00:20:39.995 Discovery Log Change Notices: Supported 00:20:39.995 Controller Attributes 00:20:39.995 128-bit Host Identifier: Not Supported 00:20:39.995 Non-Operational Permissive Mode: Not Supported 00:20:39.995 NVM Sets: Not Supported 00:20:39.995 Read Recovery Levels: Not Supported 00:20:39.995 Endurance Groups: Not Supported 00:20:39.995 Predictable Latency Mode: Not Supported 00:20:39.995 Traffic Based Keep ALive: Not Supported 00:20:39.995 Namespace Granularity: Not Supported 00:20:39.995 SQ Associations: Not Supported 00:20:39.995 UUID List: Not Supported 00:20:39.995 Multi-Domain Subsystem: Not Supported 00:20:39.995 Fixed Capacity Management: Not Supported 00:20:39.995 Variable Capacity Management: Not Supported 00:20:39.995 Delete Endurance Group: Not Supported 00:20:39.995 Delete NVM Set: Not Supported 00:20:39.995 Extended LBA Formats Supported: Not Supported 00:20:39.995 Flexible Data Placement Supported: Not Supported 00:20:39.995 00:20:39.996 Controller Memory Buffer Support 00:20:39.996 ================================ 00:20:39.996 Supported: No 00:20:39.996 00:20:39.996 Persistent Memory Region Support 00:20:39.996 ================================ 00:20:39.996 Supported: No 00:20:39.996 00:20:39.996 Admin Command Set Attributes 00:20:39.996 ============================ 00:20:39.996 Security Send/Receive: Not Supported 00:20:39.996 Format NVM: Not Supported 00:20:39.996 Firmware Activate/Download: Not Supported 00:20:39.996 Namespace Management: Not Supported 00:20:39.996 Device Self-Test: Not Supported 00:20:39.996 Directives: Not Supported 00:20:39.996 NVMe-MI: Not Supported 00:20:39.996 Virtualization Management: Not Supported 00:20:39.996 Doorbell Buffer Config: Not Supported 00:20:39.996 Get LBA Status Capability: Not Supported 00:20:39.996 Command & Feature Lockdown Capability: Not Supported 00:20:39.996 Abort Command Limit: 1 00:20:39.996 Async 
Event Request Limit: 4 00:20:39.996 Number of Firmware Slots: N/A 00:20:39.996 Firmware Slot 1 Read-Only: N/A 00:20:39.996 Firmware Activation Without Reset: N/A 00:20:39.996 Multiple Update Detection Support: N/A 00:20:39.996 Firmware Update Granularity: No Information Provided 00:20:39.996 Per-Namespace SMART Log: No 00:20:39.996 Asymmetric Namespace Access Log Page: Not Supported 00:20:39.996 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:39.996 Command Effects Log Page: Not Supported 00:20:39.996 Get Log Page Extended Data: Supported 00:20:39.996 Telemetry Log Pages: Not Supported 00:20:39.996 Persistent Event Log Pages: Not Supported 00:20:39.996 Supported Log Pages Log Page: May Support 00:20:39.996 Commands Supported & Effects Log Page: Not Supported 00:20:39.996 Feature Identifiers & Effects Log Page:May Support 00:20:39.996 NVMe-MI Commands & Effects Log Page: May Support 00:20:39.996 Data Area 4 for Telemetry Log: Not Supported 00:20:39.996 Error Log Page Entries Supported: 128 00:20:39.996 Keep Alive: Not Supported 00:20:39.996 00:20:39.996 NVM Command Set Attributes 00:20:39.996 ========================== 00:20:39.996 Submission Queue Entry Size 00:20:39.996 Max: 1 00:20:39.996 Min: 1 00:20:39.996 Completion Queue Entry Size 00:20:39.996 Max: 1 00:20:39.996 Min: 1 00:20:39.996 Number of Namespaces: 0 00:20:39.996 Compare Command: Not Supported 00:20:39.996 Write Uncorrectable Command: Not Supported 00:20:39.996 Dataset Management Command: Not Supported 00:20:39.996 Write Zeroes Command: Not Supported 00:20:39.996 Set Features Save Field: Not Supported 00:20:39.996 Reservations: Not Supported 00:20:39.996 Timestamp: Not Supported 00:20:39.996 Copy: Not Supported 00:20:39.996 Volatile Write Cache: Not Present 00:20:39.996 Atomic Write Unit (Normal): 1 00:20:39.996 Atomic Write Unit (PFail): 1 00:20:39.996 Atomic Compare & Write Unit: 1 00:20:39.996 Fused Compare & Write: Supported 00:20:39.996 Scatter-Gather List 00:20:39.996 SGL Command Set: Supported 00:20:39.996 SGL Keyed: Supported 00:20:39.996 SGL Bit Bucket Descriptor: Not Supported 00:20:39.996 SGL Metadata Pointer: Not Supported 00:20:39.996 Oversized SGL: Not Supported 00:20:39.996 SGL Metadata Address: Not Supported 00:20:39.996 SGL Offset: Supported 00:20:39.996 Transport SGL Data Block: Not Supported 00:20:39.996 Replay Protected Memory Block: Not Supported 00:20:39.996 00:20:39.996 Firmware Slot Information 00:20:39.996 ========================= 00:20:39.996 Active slot: 0 00:20:39.996 00:20:39.996 00:20:39.996 Error Log 00:20:39.996 ========= 00:20:39.996 00:20:39.996 Active Namespaces 00:20:39.996 ================= 00:20:39.996 Discovery Log Page 00:20:39.996 ================== 00:20:39.996 Generation Counter: 2 00:20:39.996 Number of Records: 2 00:20:39.996 Record Format: 0 00:20:39.996 00:20:39.996 Discovery Log Entry 0 00:20:39.996 ---------------------- 00:20:39.996 Transport Type: 3 (TCP) 00:20:39.996 Address Family: 1 (IPv4) 00:20:39.996 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:39.996 Entry Flags: 00:20:39.996 Duplicate Returned Information: 1 00:20:39.996 Explicit Persistent Connection Support for Discovery: 1 00:20:39.996 Transport Requirements: 00:20:39.996 Secure Channel: Not Required 00:20:39.996 Port ID: 0 (0x0000) 00:20:39.996 Controller ID: 65535 (0xffff) 00:20:39.996 Admin Max SQ Size: 128 00:20:39.996 Transport Service Identifier: 4420 00:20:39.996 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:39.996 Transport Address: 10.0.0.2 00:20:39.996 
Discovery Log Entry 1 00:20:39.996 ---------------------- 00:20:39.996 Transport Type: 3 (TCP) 00:20:39.996 Address Family: 1 (IPv4) 00:20:39.996 Subsystem Type: 2 (NVM Subsystem) 00:20:39.996 Entry Flags: 00:20:39.996 Duplicate Returned Information: 0 00:20:39.996 Explicit Persistent Connection Support for Discovery: 0 00:20:39.996 Transport Requirements: 00:20:39.996 Secure Channel: Not Required 00:20:39.996 Port ID: 0 (0x0000) 00:20:39.996 Controller ID: 65535 (0xffff) 00:20:39.996 Admin Max SQ Size: 128 00:20:39.996 Transport Service Identifier: 4420 00:20:39.996 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:39.996 Transport Address: 10.0.0.2 [2024-07-15 19:15:20.408064] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:39.996 [2024-07-15 19:15:20.408085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf893c0) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.408097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.996 [2024-07-15 19:15:20.408106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89540) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.408114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.996 [2024-07-15 19:15:20.408122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf896c0) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.408129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.996 [2024-07-15 19:15:20.408137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.408145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.996 [2024-07-15 19:15:20.408162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.408189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.408228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.408430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.408443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.408450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.408468] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.408492] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.408518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.408667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.408682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.408689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.408703] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:39.996 [2024-07-15 19:15:20.408712] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:39.996 [2024-07-15 19:15:20.408728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.408748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.408758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.408779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.408982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.408998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.409005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.409030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.409056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.409077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.409222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.409237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.409243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.409266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409282] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.409292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.409312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.409443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.409455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.409461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409468] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.409484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.409510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.409530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.409667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.409682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.409688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.409711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409720] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.409741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.409762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.409899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.409912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.409919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.409941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.409957] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.409967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.409988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.410121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.410133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.410139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.410162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.410187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.410207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.410335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.410347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.410353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.410376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.410401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.410421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.410547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.410559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.410566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.410588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.410613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.410638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.410773] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.410788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.410795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.410818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.410834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.410844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.410864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.411004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.411019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.411026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.411049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.411075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.411095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.411223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.411235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.411242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.411263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.411289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.411309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.411445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.411459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.411466] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.411489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.411515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.411539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.411672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.411684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.411691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.411713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.411728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.411739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.411759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.415900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.996 [2024-07-15 19:15:20.415917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.996 [2024-07-15 19:15:20.415924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.415931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.996 [2024-07-15 19:15:20.415948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.415958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.996 [2024-07-15 19:15:20.415964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf29540) 00:20:39.996 [2024-07-15 19:15:20.415974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.996 [2024-07-15 19:15:20.415995] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf89840, cid 3, qid 0 00:20:39.996 [2024-07-15 19:15:20.416174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.997 [2024-07-15 19:15:20.416187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.997 [2024-07-15 19:15:20.416193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.997 [2024-07-15 19:15:20.416200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf89840) on tqpair=0xf29540 00:20:39.997 
[2024-07-15 19:15:20.416213] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:40.257 00:20:40.257 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:40.257 [2024-07-15 19:15:20.453176] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:40.257 [2024-07-15 19:15:20.453222] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360770 ] 00:20:40.257 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.257 [2024-07-15 19:15:20.486100] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:40.257 [2024-07-15 19:15:20.486155] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:40.257 [2024-07-15 19:15:20.486165] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:40.257 [2024-07-15 19:15:20.486193] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:40.257 [2024-07-15 19:15:20.486207] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:40.257 [2024-07-15 19:15:20.489932] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:40.257 [2024-07-15 19:15:20.489974] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6b4540 0 00:20:40.257 [2024-07-15 19:15:20.490140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:40.257 [2024-07-15 19:15:20.490153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:40.257 [2024-07-15 19:15:20.490160] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:40.257 [2024-07-15 19:15:20.490166] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:40.257 [2024-07-15 19:15:20.490219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.490231] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.490238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.490251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:40.257 [2024-07-15 19:15:20.490289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.497906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.497923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.497931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.497937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.497951] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 
00:20:40.257 [2024-07-15 19:15:20.497962] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:40.257 [2024-07-15 19:15:20.497971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:40.257 [2024-07-15 19:15:20.497989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.497997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.498015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.257 [2024-07-15 19:15:20.498038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.498209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.498222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.498229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.498243] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:40.257 [2024-07-15 19:15:20.498256] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:40.257 [2024-07-15 19:15:20.498268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.498292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.257 [2024-07-15 19:15:20.498313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.498459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.498474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.498497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.498513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:40.257 [2024-07-15 19:15:20.498527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:40.257 [2024-07-15 19:15:20.498540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.498565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.257 [2024-07-15 19:15:20.498600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.498750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.498763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.498769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.498784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:40.257 [2024-07-15 19:15:20.498817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.498832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.498842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.257 [2024-07-15 19:15:20.498862] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.499032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.499048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.499055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.499070] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:40.257 [2024-07-15 19:15:20.499078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:40.257 [2024-07-15 19:15:20.499091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:40.257 [2024-07-15 19:15:20.499211] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:40.257 [2024-07-15 19:15:20.499218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:40.257 [2024-07-15 19:15:20.499230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.499253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.257 [2024-07-15 19:15:20.499277] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.499433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.499448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.499455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.499470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:40.257 [2024-07-15 19:15:20.499501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.499527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.257 [2024-07-15 19:15:20.499548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.499691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.499703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.499710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.499724] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:40.257 [2024-07-15 19:15:20.499732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:40.257 [2024-07-15 19:15:20.499745] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:40.257 [2024-07-15 19:15:20.499759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:40.257 [2024-07-15 19:15:20.499774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.499782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.257 [2024-07-15 19:15:20.499807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.257 [2024-07-15 19:15:20.499828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.257 [2024-07-15 19:15:20.500093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.257 [2024-07-15 19:15:20.500110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.257 [2024-07-15 19:15:20.500117] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.500124] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x6b4540): datao=0, datal=4096, cccid=0 00:20:40.257 [2024-07-15 19:15:20.500131] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7143c0) on tqpair(0x6b4540): expected_datao=0, payload_size=4096 00:20:40.257 [2024-07-15 19:15:20.500139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.500149] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.500157] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.541016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.257 [2024-07-15 19:15:20.541036] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.257 [2024-07-15 19:15:20.541044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.257 [2024-07-15 19:15:20.541055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.257 [2024-07-15 19:15:20.541067] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:40.258 [2024-07-15 19:15:20.541080] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:40.258 [2024-07-15 19:15:20.541089] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:40.258 [2024-07-15 19:15:20.541096] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:40.258 [2024-07-15 19:15:20.541103] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:40.258 [2024-07-15 19:15:20.541111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.541126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.541138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.541179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.258 [2024-07-15 19:15:20.541203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.258 [2024-07-15 19:15:20.541353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.258 [2024-07-15 19:15:20.541368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.258 [2024-07-15 19:15:20.541375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.258 [2024-07-15 19:15:20.541392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.541415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.258 [2024-07-15 19:15:20.541424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.541446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.258 [2024-07-15 19:15:20.541455] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.541476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.258 [2024-07-15 19:15:20.541485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.541507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.258 [2024-07-15 19:15:20.541516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.541537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.541550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.541567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.258 [2024-07-15 19:15:20.541589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7143c0, cid 0, qid 0 00:20:40.258 [2024-07-15 19:15:20.541600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714540, cid 1, qid 0 00:20:40.258 [2024-07-15 19:15:20.541608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7146c0, cid 2, qid 0 00:20:40.258 [2024-07-15 19:15:20.541615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.258 [2024-07-15 19:15:20.541623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7149c0, cid 4, qid 0 00:20:40.258 [2024-07-15 19:15:20.541794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.258 [2024-07-15 19:15:20.541809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.258 [2024-07-15 19:15:20.541815] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.541822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7149c0) on tqpair=0x6b4540 00:20:40.258 [2024-07-15 19:15:20.541830] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:40.258 [2024-07-15 19:15:20.541838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.541851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.545887] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.545906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.545914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.545920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.545930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.258 [2024-07-15 19:15:20.545953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7149c0, cid 4, qid 0 00:20:40.258 [2024-07-15 19:15:20.546112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.258 [2024-07-15 19:15:20.546124] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.258 [2024-07-15 19:15:20.546132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7149c0) on tqpair=0x6b4540 00:20:40.258 [2024-07-15 19:15:20.546216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.546235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.546249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.546283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.258 [2024-07-15 19:15:20.546308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7149c0, cid 4, qid 0 00:20:40.258 [2024-07-15 19:15:20.546557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.258 [2024-07-15 19:15:20.546573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.258 [2024-07-15 19:15:20.546581] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546587] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b4540): datao=0, datal=4096, cccid=4 00:20:40.258 [2024-07-15 19:15:20.546595] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7149c0) on tqpair(0x6b4540): expected_datao=0, payload_size=4096 00:20:40.258 [2024-07-15 19:15:20.546603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546613] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546621] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.258 [2024-07-15 19:15:20.546673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.258 [2024-07-15 19:15:20.546681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7149c0) on tqpair=0x6b4540 00:20:40.258 [2024-07-15 19:15:20.546703] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:40.258 [2024-07-15 19:15:20.546721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.546738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.546752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.546760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.546771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.258 [2024-07-15 19:15:20.546807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7149c0, cid 4, qid 0 00:20:40.258 [2024-07-15 19:15:20.546994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.258 [2024-07-15 19:15:20.547008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.258 [2024-07-15 19:15:20.547015] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.547022] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b4540): datao=0, datal=4096, cccid=4 00:20:40.258 [2024-07-15 19:15:20.547029] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7149c0) on tqpair(0x6b4540): expected_datao=0, payload_size=4096 00:20:40.258 [2024-07-15 19:15:20.547037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.547047] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.547055] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.547095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.258 [2024-07-15 19:15:20.547106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.258 [2024-07-15 19:15:20.547113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.547134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7149c0) on tqpair=0x6b4540 00:20:40.258 [2024-07-15 19:15:20.547153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id 
descriptors (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.547172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:40.258 [2024-07-15 19:15:20.547201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.547213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b4540) 00:20:40.258 [2024-07-15 19:15:20.547223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.258 [2024-07-15 19:15:20.547244] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7149c0, cid 4, qid 0 00:20:40.258 [2024-07-15 19:15:20.547406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.258 [2024-07-15 19:15:20.547419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.258 [2024-07-15 19:15:20.547426] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.258 [2024-07-15 19:15:20.547432] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b4540): datao=0, datal=4096, cccid=4 00:20:40.258 [2024-07-15 19:15:20.547439] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7149c0) on tqpair(0x6b4540): expected_datao=0, payload_size=4096 00:20:40.258 [2024-07-15 19:15:20.547446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547456] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547463] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.547534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.547541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7149c0) on tqpair=0x6b4540 00:20:40.259 [2024-07-15 19:15:20.547574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:40.259 [2024-07-15 19:15:20.547589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:40.259 [2024-07-15 19:15:20.547605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:40.259 [2024-07-15 19:15:20.547617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:40.259 [2024-07-15 19:15:20.547625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:40.259 [2024-07-15 19:15:20.547633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:40.259 [2024-07-15 19:15:20.547641] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:40.259 [2024-07-15 19:15:20.547649] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:40.259 [2024-07-15 19:15:20.547657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:40.259 [2024-07-15 19:15:20.547676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.547694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.547705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.547727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.259 [2024-07-15 19:15:20.547751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7149c0, cid 4, qid 0 00:20:40.259 [2024-07-15 19:15:20.547781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714b40, cid 5, qid 0 00:20:40.259 [2024-07-15 19:15:20.547963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.547978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.547986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.547992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7149c0) on tqpair=0x6b4540 00:20:40.259 [2024-07-15 19:15:20.548003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.548013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.548019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714b40) on tqpair=0x6b4540 00:20:40.259 [2024-07-15 19:15:20.548043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.548077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.548098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714b40, cid 5, qid 0 00:20:40.259 [2024-07-15 19:15:20.548271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.548284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.548291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714b40) on tqpair=0x6b4540 00:20:40.259 [2024-07-15 19:15:20.548313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:40.259 [2024-07-15 19:15:20.548321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.548332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.548351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714b40, cid 5, qid 0 00:20:40.259 [2024-07-15 19:15:20.548501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.548514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.548521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714b40) on tqpair=0x6b4540 00:20:40.259 [2024-07-15 19:15:20.548544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.548563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.548583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714b40, cid 5, qid 0 00:20:40.259 [2024-07-15 19:15:20.548750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.548763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.548770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714b40) on tqpair=0x6b4540 00:20:40.259 [2024-07-15 19:15:20.548815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.548836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.548852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.548896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.548910] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.548927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.548939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.548946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on 
tqpair(0x6b4540) 00:20:40.259 [2024-07-15 19:15:20.548956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.259 [2024-07-15 19:15:20.548978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714b40, cid 5, qid 0 00:20:40.259 [2024-07-15 19:15:20.549005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7149c0, cid 4, qid 0 00:20:40.259 [2024-07-15 19:15:20.549014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714cc0, cid 6, qid 0 00:20:40.259 [2024-07-15 19:15:20.549021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714e40, cid 7, qid 0 00:20:40.259 [2024-07-15 19:15:20.549269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.259 [2024-07-15 19:15:20.549284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.259 [2024-07-15 19:15:20.549291] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549298] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b4540): datao=0, datal=8192, cccid=5 00:20:40.259 [2024-07-15 19:15:20.549305] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x714b40) on tqpair(0x6b4540): expected_datao=0, payload_size=8192 00:20:40.259 [2024-07-15 19:15:20.549313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549392] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549402] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.259 [2024-07-15 19:15:20.549420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.259 [2024-07-15 19:15:20.549426] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549433] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b4540): datao=0, datal=512, cccid=4 00:20:40.259 [2024-07-15 19:15:20.549440] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7149c0) on tqpair(0x6b4540): expected_datao=0, payload_size=512 00:20:40.259 [2024-07-15 19:15:20.549447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549456] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549463] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.259 [2024-07-15 19:15:20.549480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.259 [2024-07-15 19:15:20.549486] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549492] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b4540): datao=0, datal=512, cccid=6 00:20:40.259 [2024-07-15 19:15:20.549514] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x714cc0) on tqpair(0x6b4540): expected_datao=0, payload_size=512 00:20:40.259 [2024-07-15 19:15:20.549526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549536] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.259 
[2024-07-15 19:15:20.549544] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:40.259 [2024-07-15 19:15:20.549576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:40.259 [2024-07-15 19:15:20.549583] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549589] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6b4540): datao=0, datal=4096, cccid=7 00:20:40.259 [2024-07-15 19:15:20.549597] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x714e40) on tqpair(0x6b4540): expected_datao=0, payload_size=4096 00:20:40.259 [2024-07-15 19:15:20.549604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549613] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549621] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.549641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.549647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.259 [2024-07-15 19:15:20.549654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714b40) on tqpair=0x6b4540 00:20:40.259 [2024-07-15 19:15:20.549673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.259 [2024-07-15 19:15:20.549684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.259 [2024-07-15 19:15:20.549690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.260 [2024-07-15 19:15:20.549697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7149c0) on tqpair=0x6b4540 00:20:40.260 [2024-07-15 19:15:20.549712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.260 [2024-07-15 19:15:20.549722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.260 [2024-07-15 19:15:20.549729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.260 [2024-07-15 19:15:20.549735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714cc0) on tqpair=0x6b4540 00:20:40.260 [2024-07-15 19:15:20.549745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.260 [2024-07-15 19:15:20.549755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.260 [2024-07-15 19:15:20.549761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.260 [2024-07-15 19:15:20.549768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714e40) on tqpair=0x6b4540 00:20:40.260 ===================================================== 00:20:40.260 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.260 ===================================================== 00:20:40.260 Controller Capabilities/Features 00:20:40.260 ================================ 00:20:40.260 Vendor ID: 8086 00:20:40.260 Subsystem Vendor ID: 8086 00:20:40.260 Serial Number: SPDK00000000000001 00:20:40.260 Model Number: SPDK bdev Controller 00:20:40.260 Firmware Version: 24.09 00:20:40.260 Recommended Arb Burst: 6 00:20:40.260 IEEE OUI Identifier: e4 d2 5c 00:20:40.260 Multi-path I/O 00:20:40.260 May have multiple subsystem 
ports: Yes 00:20:40.260 May have multiple controllers: Yes 00:20:40.260 Associated with SR-IOV VF: No 00:20:40.260 Max Data Transfer Size: 131072 00:20:40.260 Max Number of Namespaces: 32 00:20:40.260 Max Number of I/O Queues: 127 00:20:40.260 NVMe Specification Version (VS): 1.3 00:20:40.260 NVMe Specification Version (Identify): 1.3 00:20:40.260 Maximum Queue Entries: 128 00:20:40.260 Contiguous Queues Required: Yes 00:20:40.260 Arbitration Mechanisms Supported 00:20:40.260 Weighted Round Robin: Not Supported 00:20:40.260 Vendor Specific: Not Supported 00:20:40.260 Reset Timeout: 15000 ms 00:20:40.260 Doorbell Stride: 4 bytes 00:20:40.260 NVM Subsystem Reset: Not Supported 00:20:40.260 Command Sets Supported 00:20:40.260 NVM Command Set: Supported 00:20:40.260 Boot Partition: Not Supported 00:20:40.260 Memory Page Size Minimum: 4096 bytes 00:20:40.260 Memory Page Size Maximum: 4096 bytes 00:20:40.260 Persistent Memory Region: Not Supported 00:20:40.260 Optional Asynchronous Events Supported 00:20:40.260 Namespace Attribute Notices: Supported 00:20:40.260 Firmware Activation Notices: Not Supported 00:20:40.260 ANA Change Notices: Not Supported 00:20:40.260 PLE Aggregate Log Change Notices: Not Supported 00:20:40.260 LBA Status Info Alert Notices: Not Supported 00:20:40.260 EGE Aggregate Log Change Notices: Not Supported 00:20:40.260 Normal NVM Subsystem Shutdown event: Not Supported 00:20:40.260 Zone Descriptor Change Notices: Not Supported 00:20:40.260 Discovery Log Change Notices: Not Supported 00:20:40.260 Controller Attributes 00:20:40.260 128-bit Host Identifier: Supported 00:20:40.260 Non-Operational Permissive Mode: Not Supported 00:20:40.260 NVM Sets: Not Supported 00:20:40.260 Read Recovery Levels: Not Supported 00:20:40.260 Endurance Groups: Not Supported 00:20:40.260 Predictable Latency Mode: Not Supported 00:20:40.260 Traffic Based Keep ALive: Not Supported 00:20:40.260 Namespace Granularity: Not Supported 00:20:40.260 SQ Associations: Not Supported 00:20:40.260 UUID List: Not Supported 00:20:40.260 Multi-Domain Subsystem: Not Supported 00:20:40.260 Fixed Capacity Management: Not Supported 00:20:40.260 Variable Capacity Management: Not Supported 00:20:40.260 Delete Endurance Group: Not Supported 00:20:40.260 Delete NVM Set: Not Supported 00:20:40.260 Extended LBA Formats Supported: Not Supported 00:20:40.260 Flexible Data Placement Supported: Not Supported 00:20:40.260 00:20:40.260 Controller Memory Buffer Support 00:20:40.260 ================================ 00:20:40.260 Supported: No 00:20:40.260 00:20:40.260 Persistent Memory Region Support 00:20:40.260 ================================ 00:20:40.260 Supported: No 00:20:40.260 00:20:40.260 Admin Command Set Attributes 00:20:40.260 ============================ 00:20:40.260 Security Send/Receive: Not Supported 00:20:40.260 Format NVM: Not Supported 00:20:40.260 Firmware Activate/Download: Not Supported 00:20:40.260 Namespace Management: Not Supported 00:20:40.260 Device Self-Test: Not Supported 00:20:40.260 Directives: Not Supported 00:20:40.260 NVMe-MI: Not Supported 00:20:40.260 Virtualization Management: Not Supported 00:20:40.260 Doorbell Buffer Config: Not Supported 00:20:40.260 Get LBA Status Capability: Not Supported 00:20:40.260 Command & Feature Lockdown Capability: Not Supported 00:20:40.260 Abort Command Limit: 4 00:20:40.260 Async Event Request Limit: 4 00:20:40.260 Number of Firmware Slots: N/A 00:20:40.260 Firmware Slot 1 Read-Only: N/A 00:20:40.260 Firmware Activation Without Reset: N/A 00:20:40.260 Multiple 
Update Detection Support: N/A 00:20:40.260 Firmware Update Granularity: No Information Provided 00:20:40.260 Per-Namespace SMART Log: No 00:20:40.260 Asymmetric Namespace Access Log Page: Not Supported 00:20:40.260 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:40.260 Command Effects Log Page: Supported 00:20:40.260 Get Log Page Extended Data: Supported 00:20:40.260 Telemetry Log Pages: Not Supported 00:20:40.260 Persistent Event Log Pages: Not Supported 00:20:40.260 Supported Log Pages Log Page: May Support 00:20:40.260 Commands Supported & Effects Log Page: Not Supported 00:20:40.260 Feature Identifiers & Effects Log Page:May Support 00:20:40.260 NVMe-MI Commands & Effects Log Page: May Support 00:20:40.260 Data Area 4 for Telemetry Log: Not Supported 00:20:40.260 Error Log Page Entries Supported: 128 00:20:40.260 Keep Alive: Supported 00:20:40.260 Keep Alive Granularity: 10000 ms 00:20:40.260 00:20:40.260 NVM Command Set Attributes 00:20:40.260 ========================== 00:20:40.260 Submission Queue Entry Size 00:20:40.260 Max: 64 00:20:40.260 Min: 64 00:20:40.260 Completion Queue Entry Size 00:20:40.260 Max: 16 00:20:40.260 Min: 16 00:20:40.260 Number of Namespaces: 32 00:20:40.260 Compare Command: Supported 00:20:40.260 Write Uncorrectable Command: Not Supported 00:20:40.260 Dataset Management Command: Supported 00:20:40.260 Write Zeroes Command: Supported 00:20:40.260 Set Features Save Field: Not Supported 00:20:40.260 Reservations: Supported 00:20:40.260 Timestamp: Not Supported 00:20:40.260 Copy: Supported 00:20:40.260 Volatile Write Cache: Present 00:20:40.260 Atomic Write Unit (Normal): 1 00:20:40.260 Atomic Write Unit (PFail): 1 00:20:40.260 Atomic Compare & Write Unit: 1 00:20:40.260 Fused Compare & Write: Supported 00:20:40.260 Scatter-Gather List 00:20:40.260 SGL Command Set: Supported 00:20:40.260 SGL Keyed: Supported 00:20:40.260 SGL Bit Bucket Descriptor: Not Supported 00:20:40.260 SGL Metadata Pointer: Not Supported 00:20:40.260 Oversized SGL: Not Supported 00:20:40.260 SGL Metadata Address: Not Supported 00:20:40.260 SGL Offset: Supported 00:20:40.260 Transport SGL Data Block: Not Supported 00:20:40.260 Replay Protected Memory Block: Not Supported 00:20:40.260 00:20:40.260 Firmware Slot Information 00:20:40.260 ========================= 00:20:40.260 Active slot: 1 00:20:40.260 Slot 1 Firmware Revision: 24.09 00:20:40.260 00:20:40.260 00:20:40.260 Commands Supported and Effects 00:20:40.260 ============================== 00:20:40.260 Admin Commands 00:20:40.260 -------------- 00:20:40.260 Get Log Page (02h): Supported 00:20:40.260 Identify (06h): Supported 00:20:40.260 Abort (08h): Supported 00:20:40.260 Set Features (09h): Supported 00:20:40.260 Get Features (0Ah): Supported 00:20:40.260 Asynchronous Event Request (0Ch): Supported 00:20:40.260 Keep Alive (18h): Supported 00:20:40.260 I/O Commands 00:20:40.260 ------------ 00:20:40.260 Flush (00h): Supported LBA-Change 00:20:40.260 Write (01h): Supported LBA-Change 00:20:40.260 Read (02h): Supported 00:20:40.260 Compare (05h): Supported 00:20:40.260 Write Zeroes (08h): Supported LBA-Change 00:20:40.260 Dataset Management (09h): Supported LBA-Change 00:20:40.260 Copy (19h): Supported LBA-Change 00:20:40.260 00:20:40.260 Error Log 00:20:40.260 ========= 00:20:40.260 00:20:40.260 Arbitration 00:20:40.260 =========== 00:20:40.260 Arbitration Burst: 1 00:20:40.260 00:20:40.260 Power Management 00:20:40.260 ================ 00:20:40.260 Number of Power States: 1 00:20:40.260 Current Power State: Power State #0 00:20:40.260 
Power State #0: 00:20:40.260 Max Power: 0.00 W 00:20:40.260 Non-Operational State: Operational 00:20:40.260 Entry Latency: Not Reported 00:20:40.260 Exit Latency: Not Reported 00:20:40.260 Relative Read Throughput: 0 00:20:40.260 Relative Read Latency: 0 00:20:40.260 Relative Write Throughput: 0 00:20:40.260 Relative Write Latency: 0 00:20:40.260 Idle Power: Not Reported 00:20:40.260 Active Power: Not Reported 00:20:40.260 Non-Operational Permissive Mode: Not Supported 00:20:40.260 00:20:40.260 Health Information 00:20:40.260 ================== 00:20:40.260 Critical Warnings: 00:20:40.260 Available Spare Space: OK 00:20:40.260 Temperature: OK 00:20:40.260 Device Reliability: OK 00:20:40.260 Read Only: No 00:20:40.260 Volatile Memory Backup: OK 00:20:40.261 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:40.261 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:40.261 Available Spare: 0% 00:20:40.261 Available Spare Threshold: 0% 00:20:40.261 Life Percentage Used:[2024-07-15 19:15:20.553916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.553929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.553940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.553963] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714e40, cid 7, qid 0 00:20:40.261 [2024-07-15 19:15:20.554128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.554141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.554148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714e40) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.554216] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:40.261 [2024-07-15 19:15:20.554235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7143c0) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.554245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.261 [2024-07-15 19:15:20.554257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714540) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.554281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.261 [2024-07-15 19:15:20.554289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7146c0) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.554296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.261 [2024-07-15 19:15:20.554304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.554311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.261 [2024-07-15 19:15:20.554323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554330] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.554346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.554369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.554520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.554532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.554539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.554556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.554580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.554605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.554754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.554768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.554775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.554789] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:40.261 [2024-07-15 19:15:20.554797] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:40.261 [2024-07-15 19:15:20.554813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.554828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.554838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.554873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.555025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.555041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.555048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.555076] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.555104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.555125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.555279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.555294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.555301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.555324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.555351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.555371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.555537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.555550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.555557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.555579] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.555604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.555624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.555757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.555769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.555776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.555798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.555813] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.555823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.555842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.556011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.556025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.556033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.556039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.556056] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.556069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.556077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.261 [2024-07-15 19:15:20.556088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.261 [2024-07-15 19:15:20.556109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.261 [2024-07-15 19:15:20.556261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.261 [2024-07-15 19:15:20.556274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.261 [2024-07-15 19:15:20.556280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.556287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.261 [2024-07-15 19:15:20.556303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.261 [2024-07-15 19:15:20.556312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.556328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.556348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.556481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.556493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.556500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.556521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.556547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.556566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.556701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.556716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.556723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.556745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.556771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.556790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.556945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.556960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.556967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.556974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.556990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.557021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.557042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.557200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.557216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.557223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.557246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.557272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.557292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.557429] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.557444] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.557451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.557473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.557499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.557519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.557650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.557662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.557669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.557691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.557706] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.557717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.557736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.561889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.561906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.561913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.561920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.561938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.561947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.561954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6b4540) 00:20:40.262 [2024-07-15 19:15:20.561968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.262 [2024-07-15 19:15:20.561991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x714840, cid 3, qid 0 00:20:40.262 [2024-07-15 19:15:20.562150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:40.262 [2024-07-15 19:15:20.562166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:40.262 [2024-07-15 19:15:20.562188] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:40.262 [2024-07-15 19:15:20.562194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x714840) on tqpair=0x6b4540 00:20:40.262 [2024-07-15 19:15:20.562208] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:40.262 0% 00:20:40.262 Data Units Read: 0 00:20:40.262 Data Units Written: 0 00:20:40.262 Host Read Commands: 0 00:20:40.262 Host Write Commands: 0 00:20:40.262 Controller Busy Time: 0 minutes 00:20:40.262 Power Cycles: 0 00:20:40.262 Power On Hours: 0 hours 00:20:40.262 Unsafe Shutdowns: 0 00:20:40.262 Unrecoverable Media Errors: 0 00:20:40.262 Lifetime Error Log Entries: 0 00:20:40.262 Warning Temperature Time: 0 minutes 00:20:40.262 Critical Temperature Time: 0 minutes 00:20:40.262 00:20:40.262 Number of Queues 00:20:40.262 ================ 00:20:40.262 Number of I/O Submission Queues: 127 00:20:40.262 Number of I/O Completion Queues: 127 00:20:40.262 00:20:40.262 Active Namespaces 00:20:40.262 ================= 00:20:40.262 Namespace ID:1 00:20:40.262 Error Recovery Timeout: Unlimited 00:20:40.262 Command Set Identifier: NVM (00h) 00:20:40.262 Deallocate: Supported 00:20:40.262 Deallocated/Unwritten Error: Not Supported 00:20:40.262 Deallocated Read Value: Unknown 00:20:40.262 Deallocate in Write Zeroes: Not Supported 00:20:40.262 Deallocated Guard Field: 0xFFFF 00:20:40.262 Flush: Supported 00:20:40.262 Reservation: Supported 00:20:40.262 Namespace Sharing Capabilities: Multiple Controllers 00:20:40.262 Size (in LBAs): 131072 (0GiB) 00:20:40.262 Capacity (in LBAs): 131072 (0GiB) 00:20:40.262 Utilization (in LBAs): 131072 (0GiB) 00:20:40.262 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:40.262 EUI64: ABCDEF0123456789 00:20:40.262 UUID: 326d49ce-3318-472f-b05a-27b841ec4d8b 00:20:40.262 Thin Provisioning: Not Supported 00:20:40.262 Per-NS Atomic Units: Yes 00:20:40.262 Atomic Boundary Size (Normal): 0 00:20:40.262 Atomic Boundary Size (PFail): 0 00:20:40.262 Atomic Boundary Offset: 0 00:20:40.262 Maximum Single Source Range Length: 65535 00:20:40.262 Maximum Copy Length: 65535 00:20:40.262 Maximum Source Range Count: 1 00:20:40.262 NGUID/EUI64 Never Reused: No 00:20:40.262 Namespace Write Protected: No 00:20:40.262 Number of LBA Formats: 1 00:20:40.262 Current LBA Format: LBA Format #00 00:20:40.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:40.262 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for 
i in {1..20} 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.262 rmmod nvme_tcp 00:20:40.262 rmmod nvme_fabrics 00:20:40.262 rmmod nvme_keyring 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3360623 ']' 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3360623 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3360623 ']' 00:20:40.262 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3360623 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3360623 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3360623' 00:20:40.263 killing process with pid 3360623 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3360623 00:20:40.263 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3360623 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.830 19:15:20 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.850 19:15:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.850 00:20:42.850 real 0m5.362s 00:20:42.850 user 0m4.489s 00:20:42.850 sys 0m1.789s 00:20:42.850 19:15:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:42.850 19:15:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:42.850 ************************************ 00:20:42.850 END TEST nvmf_identify 00:20:42.850 ************************************ 00:20:42.850 19:15:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:42.850 19:15:23 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:42.850 19:15:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:42.850 19:15:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:42.850 19:15:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:42.850 
************************************ 00:20:42.851 START TEST nvmf_perf 00:20:42.851 ************************************ 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:42.851 * Looking for test storage... 00:20:42.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.851 
19:15:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.851 19:15:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:44.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:44.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:44.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:44.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.758 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:20:44.759 00:20:44.759 --- 10.0.0.2 ping statistics --- 00:20:44.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.759 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:20:44.759 00:20:44.759 --- 10.0.0.1 ping statistics --- 00:20:44.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.759 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.759 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3362705 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3362705 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3362705 ']' 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.018 19:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:45.018 [2024-07-15 19:15:25.258672] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:20:45.018 [2024-07-15 19:15:25.258776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.018 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.018 [2024-07-15 19:15:25.326442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.018 [2024-07-15 19:15:25.443219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.018 [2024-07-15 19:15:25.443280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:45.018 [2024-07-15 19:15:25.443304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.018 [2024-07-15 19:15:25.443319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.018 [2024-07-15 19:15:25.443330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.018 [2024-07-15 19:15:25.443415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.018 [2024-07-15 19:15:25.443482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.018 [2024-07-15 19:15:25.443578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.018 [2024-07-15 19:15:25.443581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:45.955 19:15:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:49.234 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:49.234 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:49.234 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:49.234 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:49.492 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:49.492 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:49.492 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:49.492 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:49.492 19:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.750 [2024-07-15 19:15:30.086573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.750 19:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.006 19:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:50.006 19:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:50.263 19:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:50.263 19:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:50.521 19:15:30 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.797 [2024-07-15 19:15:31.090200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.797 19:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:51.054 19:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:51.055 19:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:51.055 19:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:51.055 19:15:31 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:52.427 Initializing NVMe Controllers 00:20:52.427 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:52.427 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:52.427 Initialization complete. Launching workers. 00:20:52.427 ======================================================== 00:20:52.427 Latency(us) 00:20:52.427 Device Information : IOPS MiB/s Average min max 00:20:52.427 PCIE (0000:88:00.0) NSID 1 from core 0: 84944.44 331.81 376.12 21.30 5304.55 00:20:52.427 ======================================================== 00:20:52.427 Total : 84944.44 331.81 376.12 21.30 5304.55 00:20:52.427 00:20:52.427 19:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.427 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.812 Initializing NVMe Controllers 00:20:53.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:53.812 Initialization complete. Launching workers. 
00:20:53.812 ======================================================== 00:20:53.812 Latency(us) 00:20:53.812 Device Information : IOPS MiB/s Average min max 00:20:53.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 116.00 0.45 8961.62 197.56 45091.43 00:20:53.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.00 0.23 17430.97 6983.96 47909.15 00:20:53.812 ======================================================== 00:20:53.812 Total : 175.00 0.68 11817.00 197.56 47909.15 00:20:53.812 00:20:53.812 19:15:33 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:53.812 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.185 Initializing NVMe Controllers 00:20:55.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:55.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:55.185 Initialization complete. Launching workers. 00:20:55.185 ======================================================== 00:20:55.185 Latency(us) 00:20:55.185 Device Information : IOPS MiB/s Average min max 00:20:55.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8424.83 32.91 3805.00 583.88 10797.01 00:20:55.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3869.65 15.12 8304.32 4354.66 17665.74 00:20:55.185 ======================================================== 00:20:55.185 Total : 12294.48 48.03 5221.14 583.88 17665.74 00:20:55.185 00:20:55.185 19:15:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:55.185 19:15:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:55.185 19:15:35 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.185 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.719 Initializing NVMe Controllers 00:20:57.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.719 Controller IO queue size 128, less than required. 00:20:57.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.719 Controller IO queue size 128, less than required. 00:20:57.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:57.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:57.719 Initialization complete. Launching workers. 
00:20:57.719 ======================================================== 00:20:57.719 Latency(us) 00:20:57.719 Device Information : IOPS MiB/s Average min max 00:20:57.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 827.34 206.83 160999.64 91179.24 237843.54 00:20:57.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.39 141.85 227744.03 69766.04 380038.52 00:20:57.719 ======================================================== 00:20:57.719 Total : 1394.72 348.68 188151.93 69766.04 380038.52 00:20:57.719 00:20:57.719 19:15:37 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:57.719 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.719 No valid NVMe controllers or AIO or URING devices found 00:20:57.719 Initializing NVMe Controllers 00:20:57.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.719 Controller IO queue size 128, less than required. 00:20:57.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.719 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:57.719 Controller IO queue size 128, less than required. 00:20:57.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.719 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:57.719 WARNING: Some requested NVMe devices were skipped 00:20:57.719 19:15:38 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:57.719 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.251 Initializing NVMe Controllers 00:21:00.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.251 Controller IO queue size 128, less than required. 00:21:00.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.251 Controller IO queue size 128, less than required. 00:21:00.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:00.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:00.251 Initialization complete. Launching workers. 
00:21:00.251 00:21:00.251 ==================== 00:21:00.251 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:00.251 TCP transport: 00:21:00.251 polls: 30180 00:21:00.251 idle_polls: 9817 00:21:00.251 sock_completions: 20363 00:21:00.251 nvme_completions: 3991 00:21:00.251 submitted_requests: 6024 00:21:00.251 queued_requests: 1 00:21:00.251 00:21:00.251 ==================== 00:21:00.251 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:00.251 TCP transport: 00:21:00.251 polls: 34974 00:21:00.251 idle_polls: 14311 00:21:00.251 sock_completions: 20663 00:21:00.251 nvme_completions: 3283 00:21:00.251 submitted_requests: 4962 00:21:00.251 queued_requests: 1 00:21:00.251 ======================================================== 00:21:00.251 Latency(us) 00:21:00.251 Device Information : IOPS MiB/s Average min max 00:21:00.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 997.48 249.37 131819.44 69838.31 189639.11 00:21:00.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 820.48 205.12 159753.96 67863.68 264348.82 00:21:00.251 ======================================================== 00:21:00.251 Total : 1817.96 454.49 144426.85 67863.68 264348.82 00:21:00.251 00:21:00.251 19:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:00.251 19:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.521 rmmod nvme_tcp 00:21:00.521 rmmod nvme_fabrics 00:21:00.521 rmmod nvme_keyring 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3362705 ']' 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3362705 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3362705 ']' 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3362705 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3362705 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:00.521 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:00.521 19:15:40 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3362705' 00:21:00.521 killing process with pid 3362705 00:21:00.522 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3362705 00:21:00.522 19:15:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3362705 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.432 19:15:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.342 19:15:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:04.342 00:21:04.342 real 0m21.604s 00:21:04.342 user 1m8.225s 00:21:04.342 sys 0m4.663s 00:21:04.342 19:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:04.342 19:15:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:04.342 ************************************ 00:21:04.342 END TEST nvmf_perf 00:21:04.342 ************************************ 00:21:04.342 19:15:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:04.342 19:15:44 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:04.342 19:15:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:04.342 19:15:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:04.342 19:15:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:04.342 ************************************ 00:21:04.342 START TEST nvmf_fio_host 00:21:04.342 ************************************ 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:04.342 * Looking for test storage... 
00:21:04.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.342 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.343 19:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.602 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:04.602 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:04.602 19:15:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:04.602 19:15:44 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:06.507 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:06.507 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:06.507 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:06.507 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:06.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:21:06.507 00:21:06.507 --- 10.0.0.2 ping statistics --- 00:21:06.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.507 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:21:06.507 00:21:06.507 --- 10.0.0.1 ping statistics --- 00:21:06.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.507 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3366664 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3366664 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3366664 ']' 00:21:06.507 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.508 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.508 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.508 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.508 19:15:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.508 [2024-07-15 19:15:46.905648] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:21:06.508 [2024-07-15 19:15:46.905730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.768 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.768 [2024-07-15 19:15:46.974670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.768 [2024-07-15 19:15:47.091728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:06.768 [2024-07-15 19:15:47.091791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.768 [2024-07-15 19:15:47.091807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.768 [2024-07-15 19:15:47.091820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.768 [2024-07-15 19:15:47.091831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.768 [2024-07-15 19:15:47.091902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.768 [2024-07-15 19:15:47.091934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.768 [2024-07-15 19:15:47.092052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.768 [2024-07-15 19:15:47.092056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.703 19:15:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.703 19:15:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:07.703 19:15:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:07.703 [2024-07-15 19:15:48.068597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.703 19:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:07.703 19:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.703 19:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.703 19:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:07.961 Malloc1 00:21:07.961 19:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.220 19:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:08.478 19:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.736 [2024-07-15 19:15:49.112481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.736 19:15:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:08.995 19:15:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.254 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:09.254 fio-3.35 00:21:09.254 Starting 1 thread 00:21:09.254 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.785 00:21:11.785 test: (groupid=0, jobs=1): err= 0: pid=3367151: Mon Jul 15 19:15:52 2024 00:21:11.785 read: IOPS=9138, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2006msec) 00:21:11.785 slat (nsec): min=1846, max=153329, avg=2589.55, stdev=1851.59 00:21:11.785 clat (usec): min=3371, max=13107, avg=7729.29, stdev=567.11 00:21:11.785 lat (usec): min=3402, max=13110, avg=7731.87, stdev=566.99 00:21:11.785 clat percentiles (usec): 00:21:11.785 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:21:11.785 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:21:11.785 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:21:11.785 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11207], 99.95th=[11994], 00:21:11.785 | 99.99th=[13042] 00:21:11.785 bw ( KiB/s): 
min=35208, max=37152, per=99.91%, avg=36520.00, stdev=886.45, samples=4 00:21:11.785 iops : min= 8802, max= 9288, avg=9130.00, stdev=221.61, samples=4 00:21:11.785 write: IOPS=9147, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2006msec); 0 zone resets 00:21:11.785 slat (nsec): min=1980, max=131282, avg=2714.24, stdev=1427.06 00:21:11.785 clat (usec): min=1390, max=11870, avg=6182.02, stdev=506.31 00:21:11.785 lat (usec): min=1399, max=11873, avg=6184.73, stdev=506.26 00:21:11.785 clat percentiles (usec): 00:21:11.785 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:21:11.785 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:21:11.785 | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:21:11.785 | 99.00th=[ 7242], 99.50th=[ 7439], 99.90th=[10421], 99.95th=[11076], 00:21:11.785 | 99.99th=[11338] 00:21:11.785 bw ( KiB/s): min=35952, max=36992, per=99.99%, avg=36588.00, stdev=472.00, samples=4 00:21:11.785 iops : min= 8988, max= 9248, avg=9147.00, stdev=118.00, samples=4 00:21:11.785 lat (msec) : 2=0.01%, 4=0.11%, 10=99.73%, 20=0.15% 00:21:11.785 cpu : usr=56.06%, sys=36.96%, ctx=61, majf=0, minf=41 00:21:11.785 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:11.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:11.785 issued rwts: total=18331,18350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.785 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:11.785 00:21:11.785 Run status group 0 (all jobs): 00:21:11.785 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2006-2006msec 00:21:11.785 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2006-2006msec 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:11.785 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:11.786 19:15:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:12.047 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:12.047 fio-3.35 00:21:12.047 Starting 1 thread 00:21:12.047 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.579 00:21:14.579 test: (groupid=0, jobs=1): err= 0: pid=3367486: Mon Jul 15 19:15:54 2024 00:21:14.579 read: IOPS=8110, BW=127MiB/s (133MB/s)(254MiB/2006msec) 00:21:14.579 slat (usec): min=2, max=107, avg= 3.81, stdev= 1.82 00:21:14.579 clat (usec): min=3448, max=19132, avg=9472.67, stdev=2498.61 00:21:14.579 lat (usec): min=3452, max=19136, avg=9476.48, stdev=2498.69 00:21:14.579 clat percentiles (usec): 00:21:14.579 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7439], 00:21:14.579 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9765], 00:21:14.579 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12780], 95.00th=[14222], 00:21:14.579 | 99.00th=[16450], 99.50th=[16909], 99.90th=[18482], 99.95th=[18744], 00:21:14.579 | 99.99th=[19006] 00:21:14.579 bw ( KiB/s): min=61408, max=75104, per=51.82%, avg=67240.00, stdev=6070.13, samples=4 00:21:14.579 iops : min= 3838, max= 4694, avg=4202.50, stdev=379.38, samples=4 00:21:14.579 write: IOPS=4774, BW=74.6MiB/s (78.2MB/s)(137MiB/1839msec); 0 zone resets 00:21:14.579 slat (usec): min=30, max=224, avg=34.51, stdev= 6.63 00:21:14.579 clat (usec): min=3889, max=17751, avg=11177.82, stdev=1895.03 00:21:14.579 lat (usec): min=3920, max=17784, avg=11212.32, stdev=1895.99 00:21:14.579 clat percentiles (usec): 00:21:14.579 | 1.00th=[ 7373], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9634], 00:21:14.579 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:21:14.579 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13829], 95.00th=[14484], 00:21:14.579 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16909], 99.95th=[17433], 00:21:14.579 | 99.99th=[17695] 00:21:14.579 bw ( KiB/s): min=64544, max=78656, per=91.45%, avg=69856.00, stdev=6643.94, samples=4 00:21:14.579 iops : min= 4034, max= 4916, avg=4366.00, stdev=415.25, samples=4 00:21:14.579 lat (msec) : 4=0.05%, 10=51.16%, 20=48.78% 00:21:14.579 cpu : usr=74.56%, sys=21.45%, ctx=21, majf=0, minf=59 
00:21:14.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:14.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:14.579 issued rwts: total=16269,8780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:14.579 00:21:14.579 Run status group 0 (all jobs): 00:21:14.579 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=254MiB (267MB), run=2006-2006msec 00:21:14.579 WRITE: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=137MiB (144MB), run=1839-1839msec 00:21:14.579 19:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.579 19:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:14.579 19:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:14.579 19:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:14.579 19:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.580 rmmod nvme_tcp 00:21:14.580 rmmod nvme_fabrics 00:21:14.580 rmmod nvme_keyring 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3366664 ']' 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3366664 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3366664 ']' 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3366664 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3366664 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3366664' 00:21:14.580 killing process with pid 3366664 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3366664 00:21:14.580 19:15:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3366664 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.836 19:15:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.374 19:15:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:17.374 00:21:17.374 real 0m12.544s 00:21:17.374 user 0m37.746s 00:21:17.374 sys 0m3.975s 00:21:17.374 19:15:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:17.374 19:15:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.374 ************************************ 00:21:17.374 END TEST nvmf_fio_host 00:21:17.374 ************************************ 00:21:17.374 19:15:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:17.374 19:15:57 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:17.374 19:15:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:17.374 19:15:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.374 19:15:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:17.374 ************************************ 00:21:17.374 START TEST nvmf_failover 00:21:17.374 ************************************ 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:17.374 * Looking for test storage... 
00:21:17.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:17.374 19:15:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.313 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:19.314 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:19.314 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:19.314 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:19.314 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:19.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:19.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:21:19.314 00:21:19.314 --- 10.0.0.2 ping statistics --- 00:21:19.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.314 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:21:19.314 00:21:19.314 --- 10.0.0.1 ping statistics --- 00:21:19.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.314 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3369681 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3369681 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3369681 ']' 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.314 19:15:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:19.314 [2024-07-15 19:15:59.469838] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:21:19.314 [2024-07-15 19:15:59.469940] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.314 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.314 [2024-07-15 19:15:59.538042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:19.314 [2024-07-15 19:15:59.656058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.314 [2024-07-15 19:15:59.656107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.314 [2024-07-15 19:15:59.656121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.314 [2024-07-15 19:15:59.656133] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.314 [2024-07-15 19:15:59.656144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.314 [2024-07-15 19:15:59.656266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.314 [2024-07-15 19:15:59.656389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.314 [2024-07-15 19:15:59.656392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:20.250 [2024-07-15 19:16:00.657939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.250 19:16:00 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:20.819 Malloc0 00:21:20.819 19:16:00 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.819 19:16:01 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.076 19:16:01 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.335 [2024-07-15 19:16:01.696963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.335 19:16:01 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:21.592 [2024-07-15 
19:16:01.949783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:21.592 19:16:01 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:21.851 [2024-07-15 19:16:02.194598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3370094 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3370094 /var/tmp/bdevperf.sock 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3370094 ']' 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.851 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:22.421 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.421 19:16:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:22.421 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:22.680 NVMe0n1 00:21:22.680 19:16:02 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:22.938 00:21:22.938 19:16:03 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3370226 00:21:22.938 19:16:03 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:22.938 19:16:03 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:24.315 19:16:04 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.315 [2024-07-15 19:16:04.530941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 [2024-07-15 19:16:04.531256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4070 is same with the state(5) to be set 00:21:24.315 19:16:04 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:27.601 19:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.601 00:21:27.601 19:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:27.859 [2024-07-15 19:16:08.119297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is same with the state(5) to be set 00:21:27.859 [2024-07-15 19:16:08.119377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is same with the state(5) to be set 00:21:27.859 [2024-07-15 19:16:08.119408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is same with the state(5) to be set 00:21:27.859 [2024-07-15 19:16:08.119421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is 
same with the state(5) to be set 00:21:27.859 [2024-07-15 19:16:08.119433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is same with the state(5) to be set 00:21:27.859 [2024-07-15 19:16:08.119444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is same with the state(5) to be set 00:21:27.859 [2024-07-15 19:16:08.119456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is same with the state(5) to be set 00:21:27.859 [2024-07-15 19:16:08.119468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5640 is same with the state(5) to be set 00:21:27.860 19:16:08 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:31.144 19:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.144 [2024-07-15 19:16:11.371468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.144 19:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:32.081 19:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:32.337 19:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3370226 00:21:38.922 0 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3370094 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3370094 ']' 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3370094 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3370094 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3370094' 00:21:38.922 killing process with pid 3370094 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3370094 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3370094 00:21:38.922 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:38.922 [2024-07-15 19:16:02.259691] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:21:38.922 [2024-07-15 19:16:02.259782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370094 ] 00:21:38.922 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.922 [2024-07-15 19:16:02.320077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.922 [2024-07-15 19:16:02.432514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.922 Running I/O for 15 seconds... 00:21:38.922 [2024-07-15 19:16:04.531676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.922 [2024-07-15 19:16:04.531720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.922 [2024-07-15 19:16:04.531739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.922 [2024-07-15 19:16:04.531753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.922 [2024-07-15 19:16:04.531767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.922 [2024-07-15 19:16:04.531780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.922 [2024-07-15 19:16:04.531794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.922 [2024-07-15 19:16:04.531808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.922 [2024-07-15 19:16:04.531821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17590f0 is same with the state(5) to be set 00:21:38.923 [2024-07-15 19:16:04.531922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.923 [2024-07-15 19:16:04.531944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.531971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.531987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 
19:16:04.532369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.532977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.532993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.923 [2024-07-15 19:16:04.533254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.923 [2024-07-15 19:16:04.533269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77904 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 
19:16:04.533598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.924 [2024-07-15 19:16:04.533626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.924 [2024-07-15 19:16:04.533654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.924 [2024-07-15 19:16:04.533686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.924 [2024-07-15 19:16:04.533714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.924 [2024-07-15 19:16:04.533742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.924 [2024-07-15 19:16:04.533771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.924 [2024-07-15 19:16:04.533799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.533983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.533997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:38.924 [2024-07-15 19:16:04.534517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.924 [2024-07-15 19:16:04.534531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534811] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.534977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.534992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.535006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.535035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.535063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.535092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535107] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.535121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.535150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.925 [2024-07-15 19:16:04.535183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.925 [2024-07-15 19:16:04.535742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.925 [2024-07-15 19:16:04.535786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.925 [2024-07-15 19:16:04.535797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:21:38.925 [2024-07-15 19:16:04.535810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.925 [2024-07-15 19:16:04.535873] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x177f390 was disconnected and freed. reset controller. 00:21:38.926 [2024-07-15 19:16:04.535900] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:38.926 [2024-07-15 19:16:04.535916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:38.926 [2024-07-15 19:16:04.539152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:38.926 [2024-07-15 19:16:04.539187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17590f0 (9): Bad file descriptor 00:21:38.926 [2024-07-15 19:16:04.615586] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:38.926 [2024-07-15 19:16:08.119811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.119852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.119889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.119923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.119941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.119956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.119972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.119986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120477] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91072 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.926 [2024-07-15 19:16:08.120833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.926 [2024-07-15 19:16:08.120847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.120861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.120881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.120913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.120929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.120943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.120958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.120973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.120988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 
[2024-07-15 19:16:08.121093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.121982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.121996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:38.927 [2024-07-15 19:16:08.122010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.122024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.122039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.122053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.122068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.122082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.122096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.122110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.122125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.927 [2024-07-15 19:16:08.122138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.927 [2024-07-15 19:16:08.122160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122308] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.928 [2024-07-15 19:16:08.122381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.928 [2024-07-15 19:16:08.122410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.928 [2024-07-15 19:16:08.122439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.928 [2024-07-15 19:16:08.122469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122606] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.122982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.122997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.928 [2024-07-15 19:16:08.123182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:38.928 [2024-07-15 19:16:08.123210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.928 [2024-07-15 19:16:08.123418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.928 [2024-07-15 19:16:08.123433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:08.123447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:08.123476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 
19:16:08.123505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:08.123533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:08.123562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:08.123591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:08.123621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:08.123650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.929 [2024-07-15 19:16:08.123694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.929 [2024-07-15 19:16:08.123709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91488 len:8 PRP1 0x0 PRP2 0x0 00:21:38.929 [2024-07-15 19:16:08.123723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:08.123788] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1923d80 was disconnected and freed. reset controller. 
00:21:38.929 [2024-07-15 19:16:08.123806] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:21:38.929 [2024-07-15 19:16:08.123839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:38.929 [2024-07-15 19:16:08.123857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:38.929 [2024-07-15 19:16:08.123872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:38.929 [2024-07-15 19:16:08.123894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:38.929 [2024-07-15 19:16:08.123909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:38.929 [2024-07-15 19:16:08.123922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:38.929 [2024-07-15 19:16:08.123937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:38.929 [2024-07-15 19:16:08.123950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:38.929 [2024-07-15 19:16:08.123963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:38.929 [2024-07-15 19:16:08.127209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:21:38.929 [2024-07-15 19:16:08.127247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17590f0 (9): Bad file descriptor 
00:21:38.929 [2024-07-15 19:16:08.293059] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:38.929 [2024-07-15 19:16:12.634575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.634979] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.634993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.929 [2024-07-15 19:16:12.635433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.929 [2024-07-15 19:16:12.635448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53824 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.635973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.635988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:38.930 [2024-07-15 19:16:12.636174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636472] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.930 [2024-07-15 19:16:12.636675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.930 [2024-07-15 19:16:12.636691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.930 [2024-07-15 19:16:12.636709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636768] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.636979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.636995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 
[2024-07-15 19:16:12.637380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:54312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.931 [2024-07-15 19:16:12.637838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.931 [2024-07-15 19:16:12.637853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.637867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.637888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:54368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.637904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.637919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.637933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.637948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:54384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.637963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.637978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.637991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:54400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.638020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.638049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.638085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.932 [2024-07-15 19:16:12.638114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54432 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54440 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54448 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 
19:16:12.638316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54456 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54464 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54472 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54480 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54488 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54496 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54504 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54512 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:38.932 [2024-07-15 19:16:12.638691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:38.932 [2024-07-15 19:16:12.638702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54520 len:8 PRP1 0x0 PRP2 0x0 00:21:38.932 [2024-07-15 19:16:12.638714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638774] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1923b70 was disconnected and freed. reset controller. 00:21:38.932 [2024-07-15 19:16:12.638792] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:38.932 [2024-07-15 19:16:12.638826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.932 [2024-07-15 19:16:12.638844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.932 [2024-07-15 19:16:12.638873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.932 [2024-07-15 19:16:12.638908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:38.932 [2024-07-15 19:16:12.638936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.932 [2024-07-15 19:16:12.638949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
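Editor's note: the long run of ABORTED - SQ DELETION notices above is bdev_nvme manually completing every READ/WRITE still queued on qpair 0x1923b70 while the path to 10.0.0.2:4422 is torn down and the failover back to 10.0.0.2:4420 begins. If a section like this needs to be summarized rather than read line by line, a small helper along these lines works against a saved copy of the log (a hypothetical helper, not part of the test scripts; the file name is illustrative):

    # count aborted commands per opcode in a saved copy of this log
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' log.txt \
      | awk '{n[$NF]++} END {for (op in n) print op, n[op]}'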
00:21:38.932 [2024-07-15 19:16:12.642197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:38.932 [2024-07-15 19:16:12.642235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17590f0 (9): Bad file descriptor
00:21:38.932 [2024-07-15 19:16:12.832371] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:38.932
00:21:38.932 Latency(us)
00:21:38.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.932 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:38.932 Verification LBA range: start 0x0 length 0x4000
00:21:38.932 NVMe0n1 : 15.01 8576.49 33.50 1118.63 0.00 13174.48 794.93 15437.37
00:21:38.932 ===================================================================================================================
00:21:38.932 Total : 8576.49 33.50 1118.63 0.00 13174.48 794.93 15437.37
00:21:38.932 Received shutdown signal, test time was about 15.000000 seconds
00:21:38.932
00:21:38.932 Latency(us)
00:21:38.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.932 ===================================================================================================================
00:21:38.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:38.932 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:38.932 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:38.932 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:38.932 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3371955
00:21:38.932 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:38.932 19:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3371955 /var/tmp/bdevperf.sock
00:21:38.932 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3371955 ']'
00:21:38.933 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:38.933 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:38.933 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
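Editor's note: the check traced at host/failover.sh@65-@67 above is the pass/fail criterion for the first half of the test: the saved bdevperf output must contain exactly three 'Resetting controller successful' notices, one per forced failover. A minimal sketch of that assertion, assuming the log was saved to a file as in this run (file name illustrative):

    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }

The second bdevperf instance started at @72 with -z then sits idle on /var/tmp/bdevperf.sock until it is configured and driven over RPC, which is what the waitforlisten loop below is waiting for.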
00:21:38.933 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.933 19:16:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:38.933 19:16:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.933 19:16:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:38.933 19:16:19 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:38.933 [2024-07-15 19:16:19.311656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:38.933 19:16:19 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:39.190 [2024-07-15 19:16:19.604534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:39.466 19:16:19 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.723 NVMe0n1 00:21:39.723 19:16:20 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.291 00:21:40.291 19:16:20 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.548 00:21:40.548 19:16:20 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.548 19:16:20 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:40.806 19:16:21 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.064 19:16:21 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:44.372 19:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.372 19:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:44.372 19:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3372741 00:21:44.372 19:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.372 19:16:24 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3372741 00:21:45.745 0 00:21:45.745 19:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.745 [2024-07-15 19:16:18.806891] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:21:45.745 [2024-07-15 19:16:18.806976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371955 ] 00:21:45.745 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.745 [2024-07-15 19:16:18.867098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.745 [2024-07-15 19:16:18.974137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.745 [2024-07-15 19:16:21.455689] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:45.745 [2024-07-15 19:16:21.455782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.745 [2024-07-15 19:16:21.455805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.745 [2024-07-15 19:16:21.455822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.745 [2024-07-15 19:16:21.455835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.745 [2024-07-15 19:16:21.455849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.745 [2024-07-15 19:16:21.455872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.745 [2024-07-15 19:16:21.455896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.745 [2024-07-15 19:16:21.455911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.745 [2024-07-15 19:16:21.455924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:45.745 [2024-07-15 19:16:21.455968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:45.745 [2024-07-15 19:16:21.456000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211d0f0 (9): Bad file descriptor 00:21:45.745 [2024-07-15 19:16:21.463017] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:45.745 Running I/O for 1 seconds... 
00:21:45.745 00:21:45.745 Latency(us) 00:21:45.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.745 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.745 Verification LBA range: start 0x0 length 0x4000 00:21:45.745 NVMe0n1 : 1.01 8660.72 33.83 0.00 0.00 14685.89 2342.31 13981.01 00:21:45.745 =================================================================================================================== 00:21:45.745 Total : 8660.72 33.83 0.00 0.00 14685.89 2342.31 13981.01 00:21:45.745 19:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:45.745 19:16:25 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:45.745 19:16:26 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:46.003 19:16:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:46.003 19:16:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:46.297 19:16:26 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:46.555 19:16:26 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:49.838 19:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:49.838 19:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:49.838 19:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3371955 00:21:49.838 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3371955 ']' 00:21:49.838 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3371955 00:21:49.838 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:49.838 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.838 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3371955 00:21:50.096 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:50.096 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:50.096 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3371955' 00:21:50.096 killing process with pid 3371955 00:21:50.096 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3371955 00:21:50.096 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3371955 00:21:50.354 19:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:50.354 19:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.354 19:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:50.354 
19:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:50.354 19:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:50.354 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.354 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.612 rmmod nvme_tcp 00:21:50.612 rmmod nvme_fabrics 00:21:50.612 rmmod nvme_keyring 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3369681 ']' 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3369681 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3369681 ']' 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3369681 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3369681 00:21:50.612 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:50.613 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:50.613 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3369681' 00:21:50.613 killing process with pid 3369681 00:21:50.613 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3369681 00:21:50.613 19:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3369681 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.871 19:16:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.779 19:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:52.779 00:21:52.779 real 0m35.909s 00:21:52.779 user 2m6.969s 00:21:52.779 sys 0m5.740s 00:21:52.779 19:16:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:52.779 19:16:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
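Editor's note: stripped of the xtrace noise, the failover flow exercised above reduces to a handful of rpc.py calls. The sketch below condenses the commands traced in this run (same paths, addresses and ports; ordering slightly compressed) and is not a substitute for host/failover.sh itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # expose the subsystem on two extra TCP ports next to 4420
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    # attaching the same bdev name to each trid registers 4421/4422 as failover paths for NVMe0
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn

    # drop the active path to force a failover, then drive I/O through the idle bdevperf (-z)
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

    # teardown: detach the remaining paths and delete the target-side subsystem
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    $rpc nvmf_delete_subsystem $nqn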
00:21:52.779 ************************************ 00:21:52.779 END TEST nvmf_failover 00:21:52.779 ************************************ 00:21:53.036 19:16:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:53.036 19:16:33 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:53.036 19:16:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:53.036 19:16:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.036 19:16:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.036 ************************************ 00:21:53.036 START TEST nvmf_host_discovery 00:21:53.036 ************************************ 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:53.036 * Looking for test storage... 00:21:53.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:53.036 19:16:33 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.036 19:16:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.937 19:16:35 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:54.937 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:54.937 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.937 19:16:35 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:54.937 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:54.937 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.937 19:16:35 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.937 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:21:55.195 00:21:55.195 --- 10.0.0.2 ping statistics --- 00:21:55.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.195 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:21:55.195 00:21:55.195 --- 10.0.0.1 ping statistics --- 00:21:55.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.195 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3375344 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3375344 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3375344 ']' 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.195 19:16:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.195 [2024-07-15 19:16:35.512851] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:21:55.195 [2024-07-15 19:16:35.512970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.195 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.195 [2024-07-15 19:16:35.581376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.454 [2024-07-15 19:16:35.697265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.454 [2024-07-15 19:16:35.697315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.454 [2024-07-15 19:16:35.697332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.454 [2024-07-15 19:16:35.697345] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.454 [2024-07-15 19:16:35.697357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:55.454 [2024-07-15 19:16:35.697386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.065 [2024-07-15 19:16:36.489222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:56.065 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.066 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.324 [2024-07-15 19:16:36.497452] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.324 null0 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.324 null1 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3375498 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3375498 /tmp/host.sock 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3375498 ']' 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:56.324 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.324 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.324 [2024-07-15 19:16:36.570378] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:21:56.324 [2024-07-15 19:16:36.570459] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375498 ] 00:21:56.324 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.324 [2024-07-15 19:16:36.631553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.324 [2024-07-15 19:16:36.747874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:56.582 19:16:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
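Note: condensed view of the setup traced so far. discovery.sh runs one nvmf_tgt as the target inside the cvl_0_0_ns_spdk namespace (RPC socket /var/tmp/spdk.sock) and a second nvmf_tgt on /tmp/host.sock acting as the host, then drives both over RPC. This is a minimal sketch assuming rpc_cmd is the autotest wrapper around scripts/rpc.py; the data listener on port 4420 and the nqn.2021-12.io.spdk:test host entry are added in the steps that follow below.

  # target side (default socket /var/tmp/spdk.sock)
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0

  # host side (second nvmf_tgt on /tmp/host.sock): start discovery against port 8009
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test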
00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 [2024-07-15 19:16:37.151163] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.840 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:57.099 19:16:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:57.666 [2024-07-15 19:16:37.928091] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:57.666 [2024-07-15 19:16:37.928130] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:57.666 [2024-07-15 19:16:37.928166] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:57.666 [2024-07-15 19:16:38.014449] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:57.924 [2024-07-15 19:16:38.200555] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:57.924 [2024-07-15 19:16:38.200581] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:57.924 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.182 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.183 19:16:38 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.183 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.183 [2024-07-15 19:16:38.611319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.183 [2024-07-15 19:16:38.611996] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:58.183 [2024-07-15 19:16:38.612047] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.443 [2024-07-15 19:16:38.741896] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:58.443 19:16:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:58.702 [2024-07-15 19:16:39.045304] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:58.702 [2024-07-15 19:16:39.045330] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:58.702 [2024-07-15 19:16:39.045341] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.639 [2024-07-15 19:16:39.831717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.639 [2024-07-15 19:16:39.831757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.639 [2024-07-15 19:16:39.831791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.639 [2024-07-15 19:16:39.831806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.639 [2024-07-15 19:16:39.831820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.639 [2024-07-15 19:16:39.831833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.639 [2024-07-15 19:16:39.831847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.639 [2024-07-15 19:16:39.831860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.639 [2024-07-15 19:16:39.831873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.639 [2024-07-15 19:16:39.831994] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:59.639 [2024-07-15 19:16:39.832025] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.639 19:16:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:59.639 [2024-07-15 19:16:39.841705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.639 [2024-07-15 19:16:39.851746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:59.639 [2024-07-15 19:16:39.852064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.639 [2024-07-15 19:16:39.852093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6cc00 with addr=10.0.0.2, port=4420 00:21:59.639 [2024-07-15 19:16:39.852110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.639 [2024-07-15 19:16:39.852133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.639 [2024-07-15 19:16:39.852175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:59.639 [2024-07-15 19:16:39.852193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:59.639 [2024-07-15 19:16:39.852209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:59.639 [2024-07-15 19:16:39.852238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:59.639 [2024-07-15 19:16:39.861835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:59.639 [2024-07-15 19:16:39.862060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.639 [2024-07-15 19:16:39.862088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6cc00 with addr=10.0.0.2, port=4420 00:21:59.639 [2024-07-15 19:16:39.862104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.639 [2024-07-15 19:16:39.862125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.639 [2024-07-15 19:16:39.862145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:59.639 [2024-07-15 19:16:39.862158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:59.639 [2024-07-15 19:16:39.862171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:59.639 [2024-07-15 19:16:39.862195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:59.639 [2024-07-15 19:16:39.871903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:59.639 [2024-07-15 19:16:39.872103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.639 [2024-07-15 19:16:39.872130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6cc00 with addr=10.0.0.2, port=4420 00:21:59.639 [2024-07-15 19:16:39.872146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.639 [2024-07-15 19:16:39.872167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.639 [2024-07-15 19:16:39.872187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:59.639 [2024-07-15 19:16:39.872200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:59.639 [2024-07-15 19:16:39.872213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:59.639 [2024-07-15 19:16:39.872231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.639 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:59.640 [2024-07-15 19:16:39.881987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:59.640 [2024-07-15 19:16:39.882213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.640 [2024-07-15 19:16:39.882251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6cc00 with addr=10.0.0.2, port=4420 00:21:59.640 [2024-07-15 19:16:39.882266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.640 [2024-07-15 19:16:39.882288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.640 [2024-07-15 19:16:39.882321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:59.640 [2024-07-15 19:16:39.882339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:59.640 [2024-07-15 19:16:39.882352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:59.640 [2024-07-15 19:16:39.882370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:59.640 [2024-07-15 19:16:39.892077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:59.640 [2024-07-15 19:16:39.892280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.640 [2024-07-15 19:16:39.892313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6cc00 with addr=10.0.0.2, port=4420 00:21:59.640 [2024-07-15 19:16:39.892330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.640 [2024-07-15 19:16:39.892352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.640 [2024-07-15 19:16:39.892372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:59.640 [2024-07-15 19:16:39.892386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:59.640 [2024-07-15 19:16:39.892399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:59.640 [2024-07-15 19:16:39.892418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:59.640 [2024-07-15 19:16:39.902149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:59.640 [2024-07-15 19:16:39.902430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.640 [2024-07-15 19:16:39.902457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6cc00 with addr=10.0.0.2, port=4420 00:21:59.640 [2024-07-15 19:16:39.902472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.640 [2024-07-15 19:16:39.902494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.640 [2024-07-15 19:16:39.902526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:59.640 [2024-07-15 19:16:39.902544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:59.640 [2024-07-15 19:16:39.902556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:59.640 [2024-07-15 19:16:39.902575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
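The xtrace entries around this point show the shape of two helpers from the test's common library: a polling wrapper (local cond=..., local max=10, (( max-- )), eval "$cond") and a bdev lister built from rpc_cmd piped through jq, sort and xargs. The following is only a minimal sketch reconstructed from the trace; the real common/autotest_common.sh and host/discovery.sh sources may differ.

    # Hedged reconstruction of the helpers exercised in the surrounding trace.
    # Function names match the trace; the bodies are assumptions, not the real sources.
    get_bdev_list() {
        # List bdev names over the host app's RPC socket, normalised to one sorted line,
        # matching the "rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq | sort | xargs" trace.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    waitforcondition() {
        # Re-evaluate the condition up to 10 times, as the "local max=10" and
        # "(( max-- ))" trace lines suggest; the brief pause between attempts is an assumption.
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Usage mirroring the trace:
    # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'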
00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.640 [2024-07-15 19:16:39.912232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:59.640 [2024-07-15 19:16:39.912529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:59.640 [2024-07-15 19:16:39.912557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f6cc00 with addr=10.0.0.2, port=4420 00:21:59.640 [2024-07-15 19:16:39.912572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6cc00 is same with the state(5) to be set 00:21:59.640 [2024-07-15 19:16:39.912594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6cc00 (9): Bad file descriptor 00:21:59.640 [2024-07-15 19:16:39.912627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:59.640 [2024-07-15 19:16:39.912645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:59.640 [2024-07-15 19:16:39.912658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:59.640 [2024-07-15 19:16:39.912688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:59.640 [2024-07-15 19:16:39.918638] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:59.640 [2024-07-15 19:16:39.918665] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.640 19:16:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.640 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.899 19:16:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:00.834 [2024-07-15 19:16:41.206929] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:00.834 [2024-07-15 19:16:41.206957] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:00.834 [2024-07-15 19:16:41.206978] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:01.091 [2024-07-15 19:16:41.333403] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:01.091 [2024-07-15 19:16:41.441797] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:01.091 [2024-07-15 19:16:41.441839] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.091 19:16:41 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:01.091 request: 00:22:01.091 { 00:22:01.091 "name": "nvme", 00:22:01.091 "trtype": "tcp", 00:22:01.091 "traddr": "10.0.0.2", 00:22:01.091 "adrfam": "ipv4", 00:22:01.091 "trsvcid": "8009", 00:22:01.091 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:01.091 "wait_for_attach": true, 00:22:01.091 "method": "bdev_nvme_start_discovery", 00:22:01.091 "req_id": 1 00:22:01.091 } 00:22:01.091 Got JSON-RPC error response 00:22:01.091 response: 00:22:01.091 { 00:22:01.091 "code": -17, 00:22:01.091 "message": "File exists" 00:22:01.091 } 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.092 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.351 request: 00:22:01.351 { 00:22:01.351 "name": "nvme_second", 00:22:01.351 "trtype": "tcp", 00:22:01.351 "traddr": "10.0.0.2", 00:22:01.351 "adrfam": "ipv4", 00:22:01.351 "trsvcid": "8009", 00:22:01.351 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:01.351 "wait_for_attach": true, 00:22:01.351 "method": "bdev_nvme_start_discovery", 00:22:01.351 "req_id": 1 00:22:01.351 } 00:22:01.351 Got JSON-RPC error response 00:22:01.351 response: 00:22:01.351 { 00:22:01.351 "code": -17, 00:22:01.351 "message": "File exists" 00:22:01.351 } 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:01.351 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.352 19:16:41 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.352 19:16:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.287 [2024-07-15 19:16:42.650030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.287 [2024-07-15 19:16:42.650092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f87c90 with addr=10.0.0.2, port=8010 00:22:02.287 [2024-07-15 19:16:42.650123] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:02.287 [2024-07-15 19:16:42.650137] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:02.287 [2024-07-15 19:16:42.650166] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:03.223 [2024-07-15 19:16:43.652468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.223 [2024-07-15 19:16:43.652542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f87c90 with addr=10.0.0.2, port=8010 00:22:03.223 [2024-07-15 19:16:43.652575] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:03.223 [2024-07-15 19:16:43.652591] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:03.223 [2024-07-15 19:16:43.652605] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:04.603 [2024-07-15 19:16:44.654594] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:04.603 request: 00:22:04.603 { 00:22:04.603 "name": "nvme_second", 00:22:04.603 "trtype": "tcp", 00:22:04.603 "traddr": "10.0.0.2", 00:22:04.603 "adrfam": "ipv4", 00:22:04.603 "trsvcid": "8010", 00:22:04.603 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:04.603 "wait_for_attach": false, 00:22:04.603 "attach_timeout_ms": 3000, 00:22:04.603 "method": "bdev_nvme_start_discovery", 00:22:04.603 "req_id": 1 00:22:04.603 } 00:22:04.603 Got JSON-RPC error response 00:22:04.603 response: 00:22:04.603 { 00:22:04.603 "code": -110, 
00:22:04.603 "message": "Connection timed out" 00:22:04.604 } 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3375498 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.604 rmmod nvme_tcp 00:22:04.604 rmmod nvme_fabrics 00:22:04.604 rmmod nvme_keyring 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3375344 ']' 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3375344 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3375344 ']' 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3375344 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3375344 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3375344' 00:22:04.604 killing process with pid 3375344 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3375344 00:22:04.604 19:16:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3375344 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.862 19:16:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.770 19:16:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:06.770 00:22:06.770 real 0m13.872s 00:22:06.770 user 0m20.009s 00:22:06.770 sys 0m2.847s 00:22:06.770 19:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.770 19:16:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.770 ************************************ 00:22:06.770 END TEST nvmf_host_discovery 00:22:06.770 ************************************ 00:22:06.770 19:16:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:06.770 19:16:47 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:06.770 19:16:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:06.770 19:16:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.770 19:16:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.770 ************************************ 00:22:06.770 START TEST nvmf_host_multipath_status 00:22:06.770 ************************************ 00:22:06.770 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:07.028 * Looking for test storage... 
00:22:07.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:07.028 19:16:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:07.028 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.029 19:16:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:08.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.927 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:08.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:08.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:08.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.928 19:16:49 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:08.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:22:08.928 00:22:08.928 --- 10.0.0.2 ping statistics --- 00:22:08.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.928 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:22:08.928 00:22:08.928 --- 10.0.0.1 ping statistics --- 00:22:08.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.928 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3378530 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3378530 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3378530 ']' 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.928 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:08.928 [2024-07-15 19:16:49.286551] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:08.928 [2024-07-15 19:16:49.286622] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.928 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.928 [2024-07-15 19:16:49.352365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:09.187 [2024-07-15 19:16:49.469386] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.187 [2024-07-15 19:16:49.469461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.187 [2024-07-15 19:16:49.469478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.187 [2024-07-15 19:16:49.469491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.187 [2024-07-15 19:16:49.469502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.187 [2024-07-15 19:16:49.470904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.187 [2024-07-15 19:16:49.470916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3378530 00:22:09.187 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:09.444 [2024-07-15 19:16:49.824496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.444 19:16:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:09.702 Malloc0 00:22:09.702 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:09.960 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.218 19:16:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.475 [2024-07-15 19:16:50.834742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.475 19:16:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:10.765 [2024-07-15 19:16:51.075390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3378809 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3378809 /var/tmp/bdevperf.sock 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3378809 ']' 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.765 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:11.022 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.022 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:11.022 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:11.280 19:16:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:11.847 Nvme0n1 00:22:11.847 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:12.428 Nvme0n1 00:22:12.428 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:12.428 19:16:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:14.335 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:14.335 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:14.593 19:16:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:14.852 19:16:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:15.790 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:15.790 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:15.790 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.790 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:16.048 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.048 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:16.048 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.048 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:16.306 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:16.306 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:16.306 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.306 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:16.563 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.563 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:16.563 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.563 19:16:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:16.821 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.821 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:16.821 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.821 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:17.079 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.079 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:17.079 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.079 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:17.337 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.337 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:17.337 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:17.595 19:16:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:17.853 19:16:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.230 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.488 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.488 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.488 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.488 19:16:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:19.746 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.746 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:19.746 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.746 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:20.004 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.004 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:20.005 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.005 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.263 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.263 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:20.263 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.263 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:20.521 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.521 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:20.521 19:17:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:20.779 19:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:21.038 19:17:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:21.976 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:21.976 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:21.976 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.976 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:22.234 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.234 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:22.234 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.234 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:22.493 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.493 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:22.493 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.493 19:17:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:22.751 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.751 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:22.751 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.751 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.009 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.009 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:23.009 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.009 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:23.268 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.268 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:23.268 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.268 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:23.526 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.526 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:23.526 19:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:23.784 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:24.043 19:17:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:25.018 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:25.018 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:25.018 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.018 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.299 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.299 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:25.299 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.299 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:25.558 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.558 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:25.558 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.558 19:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:25.816 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.816 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:25.816 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.816 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.074 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.074 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:26.074 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.074 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.332 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:26.332 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:26.332 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.332 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.589 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.589 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:26.589 19:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:26.847 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:27.105 19:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:28.039 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:28.039 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:28.039 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.039 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.297 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.297 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:28.297 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.297 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.554 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.554 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.554 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.554 19:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.812 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.812 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:28.812 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.812 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.070 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.070 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:29.070 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.070 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.329 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:29.329 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:29.329 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.329 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.587 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:29.587 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:29.587 19:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:29.844 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:30.104 19:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:31.038 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:31.038 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:31.038 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.038 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.296 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.296 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:31.296 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.296 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.554 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.554 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.554 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.554 19:17:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.813 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.813 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.813 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.813 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.071 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.071 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:32.071 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.071 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.329 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.329 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:32.329 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.329 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.587 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.587 19:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:32.846 19:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:32.846 19:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:33.104 19:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:33.362 19:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:34.300 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:34.300 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:34.300 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.300 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.558 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.558 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:34.558 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.559 19:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.816 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.816 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.816 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.816 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:35.073 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.073 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:35.073 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.073 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:35.330 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.330 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:35.330 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.330 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:35.587 19:17:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.587 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:35.587 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.587 19:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.844 19:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.844 19:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:35.844 19:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:36.101 19:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:36.359 19:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:37.296 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:37.296 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:37.296 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.296 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.554 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.554 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:37.554 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.554 19:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.846 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.846 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.847 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.847 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:38.104 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.104 19:17:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:38.104 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.105 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:38.362 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.362 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:38.362 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.362 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.621 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.621 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.621 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.621 19:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.879 19:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.879 19:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:38.879 19:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:39.136 19:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:39.394 19:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:40.333 19:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:40.333 19:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:40.333 19:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.333 19:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.592 19:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.592 19:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:40.592 19:17:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.592 19:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:40.850 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.850 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:40.850 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.850 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:41.107 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.108 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:41.108 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.108 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.365 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.365 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.365 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.365 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.622 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.622 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:41.622 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.622 19:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:41.880 19:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.880 19:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:41.880 19:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:42.139 19:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:42.398 19:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:43.341 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:43.341 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:43.341 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.341 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:43.598 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.598 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:43.598 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.598 19:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.856 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:43.856 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.856 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.856 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:44.114 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.114 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:44.114 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.114 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.371 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.371 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:44.371 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.371 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:44.629 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.629 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:44.629 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.629 19:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3378809 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3378809 ']' 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3378809 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378809 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:44.889 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:44.890 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3378809' 00:22:44.890 killing process with pid 3378809 00:22:44.890 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3378809 00:22:44.890 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3378809 00:22:45.151 Connection closed with partial response: 00:22:45.151 00:22:45.151 00:22:45.151 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3378809 00:22:45.151 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:45.151 [2024-07-15 19:16:51.133273] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:45.151 [2024-07-15 19:16:51.133361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378809 ] 00:22:45.151 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.151 [2024-07-15 19:16:51.193714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.151 [2024-07-15 19:16:51.301418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.151 Running I/O for 90 seconds... 
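Every check_status round recorded above follows the same two-step pattern: nvmf_subsystem_listener_set_ana_state flips the ANA state of one of the two listeners on the target side, and after a short sleep the initiator side is queried through the bdevperf RPC socket with bdev_nvme_get_io_paths, filtering the current/connected/accessible flags per trsvcid with jq. A minimal stand-alone sketch of that loop is shown below; it is not the multipath_status.sh source, and it assumes a running nvmf_tgt and bdevperf instance plus the rpc.py path, subsystem NQN and RPC socket used in this run.

    # Sketch of the check pattern seen in this log (not the actual multipath_status.sh).
    # Assumes nvmf_tgt and bdevperf are already up; paths/NQN copied from this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    set_ana() {   # $1: ANA state for port 4420, $2: ANA state for port 4421
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
        sleep 1   # give bdev_nvme time to re-read the ANA log page
    }

    port_status() {   # $1: trsvcid, $2: field (current|connected|accessible), $3: expected value
        local got
        got=$("$rpc" -s "$bperf" bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$got" == "$3" ]]
    }

    # Example round: 4420 non-optimized, 4421 optimized -> the active (current) path should be 4421.
    set_ana non_optimized optimized
    port_status 4420 current false && port_status 4421 current true

Midway through the run the multipath policy is switched to active_active (bdev_nvme_set_multipath_policy at multipath_status.sh@116), after which both paths are expected to report current=true whenever both listeners are optimized, as the subsequent check_status true true true true true true round shows.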
00:22:45.151 [2024-07-15 19:17:07.094411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.151 [2024-07-15 19:17:07.094474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.094565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.094975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.094998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.095014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.095067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.095105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.095142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.095179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.095218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.095256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.095294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.095346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.095385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.095438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.095480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.095504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.095521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.152 [2024-07-15 19:17:07.097214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.152 [2024-07-15 19:17:07.097450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:45.152 [2024-07-15 19:17:07.097926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.152 [2024-07-15 19:17:07.097944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.097971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.097987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.153 [2024-07-15 19:17:07.098598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:22:45.153 [2024-07-15 19:17:07.098748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.098970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.098997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.099958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.153 [2024-07-15 19:17:07.099989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.153 [2024-07-15 19:17:07.100006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:07.100037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:07.100054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:45.154 [2024-07-15 19:17:22.686642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.686933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.686981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.686998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.687036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.687074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.687387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.687798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.687849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.687900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.687962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.687978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.688204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:22:45.154 [2024-07-15 19:17:22.688226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.154 [2024-07-15 19:17:22.688242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.154 [2024-07-15 19:17:22.688466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:45.154 [2024-07-15 19:17:22.688488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.688975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.688997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.155 [2024-07-15 19:17:22.689333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.155 [2024-07-15 19:17:22.689370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:45.155 [2024-07-15 19:17:22.689392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:45.155 [2024-07-15 19:17:22.689408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:45.155 Received shutdown signal, test time was about 32.476839 seconds 00:22:45.155 00:22:45.155 Latency(us) 00:22:45.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.155 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:45.155 Verification LBA range: start 0x0 length 0x4000 00:22:45.155 Nvme0n1 : 32.48 7815.67 30.53 0.00 0.00 16350.11 232.11 4026531.84 00:22:45.155 =================================================================================================================== 00:22:45.155 Total : 7815.67 30.53 0.00 0.00 16350.11 232.11 4026531.84 00:22:45.155 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.414 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.414 rmmod nvme_tcp 00:22:45.414 rmmod nvme_fabrics 00:22:45.414 rmmod nvme_keyring 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3378530 ']' 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3378530 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3378530 ']' 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3378530 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378530 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 3378530' 00:22:45.673 killing process with pid 3378530 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3378530 00:22:45.673 19:17:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3378530 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.934 19:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.836 19:17:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.836 00:22:47.836 real 0m41.048s 00:22:47.836 user 2m4.169s 00:22:47.836 sys 0m10.425s 00:22:47.836 19:17:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:47.836 19:17:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:47.836 ************************************ 00:22:47.836 END TEST nvmf_host_multipath_status 00:22:47.836 ************************************ 00:22:47.836 19:17:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:47.836 19:17:28 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:47.836 19:17:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:47.836 19:17:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.836 19:17:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:48.094 ************************************ 00:22:48.094 START TEST nvmf_discovery_remove_ifc 00:22:48.094 ************************************ 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:48.094 * Looking for test storage... 
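The discovery_remove_ifc test that starts here first sources test/nvmf/common.sh; condensed to its effect on the environment, that setup amounts to roughly the following (a sketch assembled from the values logged below for this tcp/phy run; the exact derivation of NVME_HOSTID in the script may differ):

# Environment established by test/nvmf/common.sh for the tcp/phy host tests (condensed sketch).
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)                      # fresh hostnqn generated per run, as logged below
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                   # assumed derivation: the uuid suffix of the hostnqn
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NET_TYPE=phy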
00:22:48.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.094 19:17:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:49.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.999 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:50.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.000 19:17:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:50.000 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:50.000 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:22:50.000 00:22:50.000 --- 10.0.0.2 ping statistics --- 00:22:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.000 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:22:50.000 00:22:50.000 --- 10.0.0.1 ping statistics --- 00:22:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.000 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:50.000 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3385016 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3385016 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3385016 ']' 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.260 19:17:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.260 [2024-07-15 19:17:30.484131] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:22:50.260 [2024-07-15 19:17:30.484219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.260 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.260 [2024-07-15 19:17:30.551993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.260 [2024-07-15 19:17:30.666852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.260 [2024-07-15 19:17:30.666941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.261 [2024-07-15 19:17:30.666960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.261 [2024-07-15 19:17:30.666974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.261 [2024-07-15 19:17:30.666985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.261 [2024-07-15 19:17:30.667017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.196 [2024-07-15 19:17:31.491013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.196 [2024-07-15 19:17:31.499147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:51.196 null0 00:22:51.196 [2024-07-15 19:17:31.531110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3385170 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3385170 /tmp/host.sock 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3385170 ']' 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:51.196 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:51.196 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:51.197 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.197 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.197 [2024-07-15 19:17:31.597576] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:22:51.197 [2024-07-15 19:17:31.597662] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385170 ] 00:22:51.197 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.455 [2024-07-15 19:17:31.656558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.455 [2024-07-15 19:17:31.764377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.455 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.715 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.715 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:51.715 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.715 19:17:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.681 [2024-07-15 19:17:32.960081] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:52.681 [2024-07-15 19:17:32.960117] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:52.681 [2024-07-15 19:17:32.960140] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:52.681 [2024-07-15 19:17:33.046464] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:52.939 [2024-07-15 19:17:33.273907] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:52.939 [2024-07-15 19:17:33.273989] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:52.939 [2024-07-15 19:17:33.274032] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:52.939 [2024-07-15 19:17:33.274065] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:52.939 [2024-07-15 19:17:33.274100] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.939 [2024-07-15 19:17:33.277609] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10e2870 was disconnected and freed. delete nvme_qpair. 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:52.939 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.197 19:17:33 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:53.197 19:17:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:54.130 19:17:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.066 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.066 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.066 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.066 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.066 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.066 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.067 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.067 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.324 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:55.324 19:17:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:56.261 19:17:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:57.196 19:17:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:57.196 19:17:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:58.578 19:17:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.578 [2024-07-15 19:17:38.714894] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:58.578 [2024-07-15 19:17:38.714992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.578 [2024-07-15 19:17:38.715024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.578 [2024-07-15 19:17:38.715042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.578 [2024-07-15 19:17:38.715055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.578 [2024-07-15 19:17:38.715067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.578 [2024-07-15 19:17:38.715080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.578 [2024-07-15 19:17:38.715093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.578 [2024-07-15 19:17:38.715105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.578 [2024-07-15 19:17:38.715118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.578 [2024-07-15 19:17:38.715130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.578 [2024-07-15 19:17:38.715142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a9300 is same with the state(5) to be set 00:22:58.578 [2024-07-15 19:17:38.724905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a9300 (9): Bad file descriptor 00:22:58.578 [2024-07-15 19:17:38.734949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.515 [2024-07-15 19:17:39.794939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:59.515 [2024-07-15 19:17:39.795028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10a9300 with addr=10.0.0.2, port=4420 00:22:59.515 [2024-07-15 19:17:39.795054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a9300 is same with the state(5) to be set 00:22:59.515 [2024-07-15 19:17:39.795104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a9300 (9): Bad file descriptor 00:22:59.515 [2024-07-15 19:17:39.795593] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.515 [2024-07-15 19:17:39.795624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:59.515 [2024-07-15 19:17:39.795639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:59.515 [2024-07-15 19:17:39.795654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:59.515 [2024-07-15 19:17:39.795688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.515 [2024-07-15 19:17:39.795704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:59.515 19:17:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.456 [2024-07-15 19:17:40.798208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:00.456 [2024-07-15 19:17:40.798265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:00.456 [2024-07-15 19:17:40.798283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:00.456 [2024-07-15 19:17:40.798298] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:00.456 [2024-07-15 19:17:40.798327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:00.456 [2024-07-15 19:17:40.798370] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:00.456 [2024-07-15 19:17:40.798415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.457 [2024-07-15 19:17:40.798438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.457 [2024-07-15 19:17:40.798459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.457 [2024-07-15 19:17:40.798474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.457 [2024-07-15 19:17:40.798489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.457 [2024-07-15 19:17:40.798503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.457 [2024-07-15 19:17:40.798517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.457 [2024-07-15 19:17:40.798531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.457 [2024-07-15 19:17:40.798547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.457 [2024-07-15 19:17:40.798561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.457 [2024-07-15 19:17:40.798574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:00.457 [2024-07-15 19:17:40.798752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a8780 (9): Bad file descriptor 00:23:00.457 [2024-07-15 19:17:40.799768] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:00.457 [2024-07-15 19:17:40.799793] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.457 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.716 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:00.716 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.716 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:00.717 19:17:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:01.657 19:17:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:02.593 [2024-07-15 19:17:42.856111] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:02.593 [2024-07-15 19:17:42.856139] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:02.593 [2024-07-15 19:17:42.856180] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:02.593 [2024-07-15 19:17:42.943472] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.593 19:17:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.850 19:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:02.850 19:17:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:02.850 [2024-07-15 19:17:43.046592] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:02.850 [2024-07-15 19:17:43.046652] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:02.850 [2024-07-15 19:17:43.046691] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:02.850 [2024-07-15 19:17:43.046716] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:02.850 [2024-07-15 19:17:43.046730] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:02.850 [2024-07-15 19:17:43.053854] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10b0110 was disconnected and freed. delete nvme_qpair. 
00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3385170 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3385170 ']' 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3385170 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3385170 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3385170' 00:23:03.786 killing process with pid 3385170 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3385170 00:23:03.786 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3385170 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.044 rmmod nvme_tcp 00:23:04.044 rmmod nvme_fabrics 00:23:04.044 rmmod nvme_keyring 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3385016 ']' 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3385016 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3385016 ']' 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3385016 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3385016 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3385016' 00:23:04.044 killing process with pid 3385016 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3385016 00:23:04.044 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3385016 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.615 19:17:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.522 19:17:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.522 00:23:06.522 real 0m18.501s 00:23:06.522 user 0m26.835s 00:23:06.522 sys 0m3.016s 00:23:06.522 19:17:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:06.522 19:17:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.522 ************************************ 00:23:06.522 END TEST nvmf_discovery_remove_ifc 00:23:06.522 ************************************ 00:23:06.522 19:17:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:06.522 19:17:46 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:06.522 19:17:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:06.522 19:17:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.522 19:17:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.522 ************************************ 00:23:06.522 START TEST nvmf_identify_kernel_target 00:23:06.522 ************************************ 
00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:06.522 * Looking for test storage... 00:23:06.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.522 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:06.523 19:17:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.523 19:17:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:08.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:08.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:08.462 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:08.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:08.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.463 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:08.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:23:08.722 00:23:08.722 --- 10.0.0.2 ping statistics --- 00:23:08.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.722 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:23:08.722 00:23:08.722 --- 10.0.0.1 ping statistics --- 00:23:08.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.722 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:08.722 19:17:48 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:08.722 19:17:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:09.659 Waiting for block devices as requested 00:23:09.917 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:09.917 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:09.917 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:10.176 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:10.176 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:10.176 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:10.434 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:10.434 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:10.434 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:10.434 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:10.434 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:10.694 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:10.694 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:10.694 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:10.694 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:10.953 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:10.953 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:10.953 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:10.953 No valid GPT data, bailing 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:11.213 00:23:11.213 Discovery Log Number of Records 2, Generation counter 2 00:23:11.213 =====Discovery Log Entry 0====== 00:23:11.213 trtype: tcp 00:23:11.213 adrfam: ipv4 00:23:11.213 subtype: current discovery subsystem 00:23:11.213 treq: not specified, sq flow control disable supported 00:23:11.213 portid: 1 00:23:11.213 trsvcid: 4420 00:23:11.213 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:11.213 traddr: 10.0.0.1 00:23:11.213 eflags: none 00:23:11.213 sectype: none 00:23:11.213 =====Discovery Log Entry 1====== 00:23:11.213 trtype: tcp 00:23:11.213 adrfam: ipv4 00:23:11.213 subtype: nvme subsystem 00:23:11.213 treq: not specified, sq flow control disable supported 00:23:11.213 portid: 1 00:23:11.213 trsvcid: 4420 00:23:11.213 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:11.213 traddr: 10.0.0.1 00:23:11.213 eflags: none 00:23:11.213 sectype: none 00:23:11.213 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:11.213 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:11.214 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.214 ===================================================== 00:23:11.214 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:11.214 ===================================================== 00:23:11.214 Controller Capabilities/Features 00:23:11.214 ================================ 00:23:11.214 Vendor ID: 0000 00:23:11.214 Subsystem Vendor ID: 0000 00:23:11.214 Serial Number: 8433b10fde36d198c4fd 00:23:11.214 Model Number: Linux 00:23:11.214 Firmware Version: 6.7.0-68 00:23:11.214 Recommended Arb Burst: 0 00:23:11.214 IEEE OUI Identifier: 00 00 00 00:23:11.214 Multi-path I/O 00:23:11.214 May have multiple subsystem ports: No 00:23:11.214 May have multiple 
controllers: No 00:23:11.214 Associated with SR-IOV VF: No 00:23:11.214 Max Data Transfer Size: Unlimited 00:23:11.214 Max Number of Namespaces: 0 00:23:11.214 Max Number of I/O Queues: 1024 00:23:11.214 NVMe Specification Version (VS): 1.3 00:23:11.214 NVMe Specification Version (Identify): 1.3 00:23:11.214 Maximum Queue Entries: 1024 00:23:11.214 Contiguous Queues Required: No 00:23:11.214 Arbitration Mechanisms Supported 00:23:11.214 Weighted Round Robin: Not Supported 00:23:11.214 Vendor Specific: Not Supported 00:23:11.214 Reset Timeout: 7500 ms 00:23:11.214 Doorbell Stride: 4 bytes 00:23:11.214 NVM Subsystem Reset: Not Supported 00:23:11.214 Command Sets Supported 00:23:11.214 NVM Command Set: Supported 00:23:11.214 Boot Partition: Not Supported 00:23:11.214 Memory Page Size Minimum: 4096 bytes 00:23:11.214 Memory Page Size Maximum: 4096 bytes 00:23:11.214 Persistent Memory Region: Not Supported 00:23:11.214 Optional Asynchronous Events Supported 00:23:11.214 Namespace Attribute Notices: Not Supported 00:23:11.214 Firmware Activation Notices: Not Supported 00:23:11.214 ANA Change Notices: Not Supported 00:23:11.214 PLE Aggregate Log Change Notices: Not Supported 00:23:11.214 LBA Status Info Alert Notices: Not Supported 00:23:11.214 EGE Aggregate Log Change Notices: Not Supported 00:23:11.214 Normal NVM Subsystem Shutdown event: Not Supported 00:23:11.214 Zone Descriptor Change Notices: Not Supported 00:23:11.214 Discovery Log Change Notices: Supported 00:23:11.214 Controller Attributes 00:23:11.214 128-bit Host Identifier: Not Supported 00:23:11.214 Non-Operational Permissive Mode: Not Supported 00:23:11.214 NVM Sets: Not Supported 00:23:11.214 Read Recovery Levels: Not Supported 00:23:11.214 Endurance Groups: Not Supported 00:23:11.214 Predictable Latency Mode: Not Supported 00:23:11.214 Traffic Based Keep ALive: Not Supported 00:23:11.214 Namespace Granularity: Not Supported 00:23:11.214 SQ Associations: Not Supported 00:23:11.214 UUID List: Not Supported 00:23:11.214 Multi-Domain Subsystem: Not Supported 00:23:11.214 Fixed Capacity Management: Not Supported 00:23:11.214 Variable Capacity Management: Not Supported 00:23:11.214 Delete Endurance Group: Not Supported 00:23:11.214 Delete NVM Set: Not Supported 00:23:11.214 Extended LBA Formats Supported: Not Supported 00:23:11.214 Flexible Data Placement Supported: Not Supported 00:23:11.214 00:23:11.214 Controller Memory Buffer Support 00:23:11.214 ================================ 00:23:11.214 Supported: No 00:23:11.214 00:23:11.214 Persistent Memory Region Support 00:23:11.214 ================================ 00:23:11.214 Supported: No 00:23:11.214 00:23:11.214 Admin Command Set Attributes 00:23:11.214 ============================ 00:23:11.214 Security Send/Receive: Not Supported 00:23:11.214 Format NVM: Not Supported 00:23:11.214 Firmware Activate/Download: Not Supported 00:23:11.214 Namespace Management: Not Supported 00:23:11.214 Device Self-Test: Not Supported 00:23:11.214 Directives: Not Supported 00:23:11.214 NVMe-MI: Not Supported 00:23:11.214 Virtualization Management: Not Supported 00:23:11.214 Doorbell Buffer Config: Not Supported 00:23:11.214 Get LBA Status Capability: Not Supported 00:23:11.214 Command & Feature Lockdown Capability: Not Supported 00:23:11.214 Abort Command Limit: 1 00:23:11.214 Async Event Request Limit: 1 00:23:11.214 Number of Firmware Slots: N/A 00:23:11.214 Firmware Slot 1 Read-Only: N/A 00:23:11.214 Firmware Activation Without Reset: N/A 00:23:11.214 Multiple Update Detection Support: N/A 
00:23:11.214 Firmware Update Granularity: No Information Provided 00:23:11.214 Per-Namespace SMART Log: No 00:23:11.214 Asymmetric Namespace Access Log Page: Not Supported 00:23:11.214 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:11.214 Command Effects Log Page: Not Supported 00:23:11.214 Get Log Page Extended Data: Supported 00:23:11.214 Telemetry Log Pages: Not Supported 00:23:11.214 Persistent Event Log Pages: Not Supported 00:23:11.214 Supported Log Pages Log Page: May Support 00:23:11.214 Commands Supported & Effects Log Page: Not Supported 00:23:11.214 Feature Identifiers & Effects Log Page:May Support 00:23:11.214 NVMe-MI Commands & Effects Log Page: May Support 00:23:11.214 Data Area 4 for Telemetry Log: Not Supported 00:23:11.214 Error Log Page Entries Supported: 1 00:23:11.214 Keep Alive: Not Supported 00:23:11.214 00:23:11.214 NVM Command Set Attributes 00:23:11.214 ========================== 00:23:11.214 Submission Queue Entry Size 00:23:11.214 Max: 1 00:23:11.214 Min: 1 00:23:11.214 Completion Queue Entry Size 00:23:11.214 Max: 1 00:23:11.214 Min: 1 00:23:11.214 Number of Namespaces: 0 00:23:11.214 Compare Command: Not Supported 00:23:11.214 Write Uncorrectable Command: Not Supported 00:23:11.214 Dataset Management Command: Not Supported 00:23:11.214 Write Zeroes Command: Not Supported 00:23:11.214 Set Features Save Field: Not Supported 00:23:11.214 Reservations: Not Supported 00:23:11.214 Timestamp: Not Supported 00:23:11.214 Copy: Not Supported 00:23:11.214 Volatile Write Cache: Not Present 00:23:11.214 Atomic Write Unit (Normal): 1 00:23:11.214 Atomic Write Unit (PFail): 1 00:23:11.214 Atomic Compare & Write Unit: 1 00:23:11.214 Fused Compare & Write: Not Supported 00:23:11.214 Scatter-Gather List 00:23:11.214 SGL Command Set: Supported 00:23:11.214 SGL Keyed: Not Supported 00:23:11.214 SGL Bit Bucket Descriptor: Not Supported 00:23:11.214 SGL Metadata Pointer: Not Supported 00:23:11.214 Oversized SGL: Not Supported 00:23:11.214 SGL Metadata Address: Not Supported 00:23:11.214 SGL Offset: Supported 00:23:11.214 Transport SGL Data Block: Not Supported 00:23:11.214 Replay Protected Memory Block: Not Supported 00:23:11.214 00:23:11.214 Firmware Slot Information 00:23:11.214 ========================= 00:23:11.214 Active slot: 0 00:23:11.214 00:23:11.214 00:23:11.214 Error Log 00:23:11.214 ========= 00:23:11.214 00:23:11.214 Active Namespaces 00:23:11.214 ================= 00:23:11.214 Discovery Log Page 00:23:11.214 ================== 00:23:11.214 Generation Counter: 2 00:23:11.214 Number of Records: 2 00:23:11.214 Record Format: 0 00:23:11.214 00:23:11.214 Discovery Log Entry 0 00:23:11.214 ---------------------- 00:23:11.214 Transport Type: 3 (TCP) 00:23:11.214 Address Family: 1 (IPv4) 00:23:11.214 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:11.214 Entry Flags: 00:23:11.214 Duplicate Returned Information: 0 00:23:11.214 Explicit Persistent Connection Support for Discovery: 0 00:23:11.215 Transport Requirements: 00:23:11.215 Secure Channel: Not Specified 00:23:11.215 Port ID: 1 (0x0001) 00:23:11.215 Controller ID: 65535 (0xffff) 00:23:11.215 Admin Max SQ Size: 32 00:23:11.215 Transport Service Identifier: 4420 00:23:11.215 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:11.215 Transport Address: 10.0.0.1 00:23:11.215 Discovery Log Entry 1 00:23:11.215 ---------------------- 00:23:11.215 Transport Type: 3 (TCP) 00:23:11.215 Address Family: 1 (IPv4) 00:23:11.215 Subsystem Type: 2 (NVM Subsystem) 00:23:11.215 Entry Flags: 
00:23:11.215 Duplicate Returned Information: 0 00:23:11.215 Explicit Persistent Connection Support for Discovery: 0 00:23:11.215 Transport Requirements: 00:23:11.215 Secure Channel: Not Specified 00:23:11.215 Port ID: 1 (0x0001) 00:23:11.215 Controller ID: 65535 (0xffff) 00:23:11.215 Admin Max SQ Size: 32 00:23:11.215 Transport Service Identifier: 4420 00:23:11.215 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:11.215 Transport Address: 10.0.0.1 00:23:11.215 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:11.215 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.476 get_feature(0x01) failed 00:23:11.476 get_feature(0x02) failed 00:23:11.476 get_feature(0x04) failed 00:23:11.476 ===================================================== 00:23:11.476 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:11.476 ===================================================== 00:23:11.476 Controller Capabilities/Features 00:23:11.476 ================================ 00:23:11.476 Vendor ID: 0000 00:23:11.476 Subsystem Vendor ID: 0000 00:23:11.476 Serial Number: 947b6b2a756916460bb4 00:23:11.476 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:11.476 Firmware Version: 6.7.0-68 00:23:11.476 Recommended Arb Burst: 6 00:23:11.476 IEEE OUI Identifier: 00 00 00 00:23:11.476 Multi-path I/O 00:23:11.476 May have multiple subsystem ports: Yes 00:23:11.476 May have multiple controllers: Yes 00:23:11.476 Associated with SR-IOV VF: No 00:23:11.476 Max Data Transfer Size: Unlimited 00:23:11.476 Max Number of Namespaces: 1024 00:23:11.476 Max Number of I/O Queues: 128 00:23:11.476 NVMe Specification Version (VS): 1.3 00:23:11.476 NVMe Specification Version (Identify): 1.3 00:23:11.476 Maximum Queue Entries: 1024 00:23:11.476 Contiguous Queues Required: No 00:23:11.476 Arbitration Mechanisms Supported 00:23:11.476 Weighted Round Robin: Not Supported 00:23:11.476 Vendor Specific: Not Supported 00:23:11.476 Reset Timeout: 7500 ms 00:23:11.476 Doorbell Stride: 4 bytes 00:23:11.476 NVM Subsystem Reset: Not Supported 00:23:11.476 Command Sets Supported 00:23:11.476 NVM Command Set: Supported 00:23:11.476 Boot Partition: Not Supported 00:23:11.476 Memory Page Size Minimum: 4096 bytes 00:23:11.476 Memory Page Size Maximum: 4096 bytes 00:23:11.476 Persistent Memory Region: Not Supported 00:23:11.476 Optional Asynchronous Events Supported 00:23:11.476 Namespace Attribute Notices: Supported 00:23:11.476 Firmware Activation Notices: Not Supported 00:23:11.476 ANA Change Notices: Supported 00:23:11.476 PLE Aggregate Log Change Notices: Not Supported 00:23:11.476 LBA Status Info Alert Notices: Not Supported 00:23:11.476 EGE Aggregate Log Change Notices: Not Supported 00:23:11.476 Normal NVM Subsystem Shutdown event: Not Supported 00:23:11.476 Zone Descriptor Change Notices: Not Supported 00:23:11.476 Discovery Log Change Notices: Not Supported 00:23:11.476 Controller Attributes 00:23:11.476 128-bit Host Identifier: Supported 00:23:11.476 Non-Operational Permissive Mode: Not Supported 00:23:11.476 NVM Sets: Not Supported 00:23:11.476 Read Recovery Levels: Not Supported 00:23:11.476 Endurance Groups: Not Supported 00:23:11.476 Predictable Latency Mode: Not Supported 00:23:11.476 Traffic Based Keep ALive: Supported 00:23:11.477 Namespace Granularity: Not Supported 
00:23:11.477 SQ Associations: Not Supported 00:23:11.477 UUID List: Not Supported 00:23:11.477 Multi-Domain Subsystem: Not Supported 00:23:11.477 Fixed Capacity Management: Not Supported 00:23:11.477 Variable Capacity Management: Not Supported 00:23:11.477 Delete Endurance Group: Not Supported 00:23:11.477 Delete NVM Set: Not Supported 00:23:11.477 Extended LBA Formats Supported: Not Supported 00:23:11.477 Flexible Data Placement Supported: Not Supported 00:23:11.477 00:23:11.477 Controller Memory Buffer Support 00:23:11.477 ================================ 00:23:11.477 Supported: No 00:23:11.477 00:23:11.477 Persistent Memory Region Support 00:23:11.477 ================================ 00:23:11.477 Supported: No 00:23:11.477 00:23:11.477 Admin Command Set Attributes 00:23:11.477 ============================ 00:23:11.477 Security Send/Receive: Not Supported 00:23:11.477 Format NVM: Not Supported 00:23:11.477 Firmware Activate/Download: Not Supported 00:23:11.477 Namespace Management: Not Supported 00:23:11.477 Device Self-Test: Not Supported 00:23:11.477 Directives: Not Supported 00:23:11.477 NVMe-MI: Not Supported 00:23:11.477 Virtualization Management: Not Supported 00:23:11.477 Doorbell Buffer Config: Not Supported 00:23:11.477 Get LBA Status Capability: Not Supported 00:23:11.477 Command & Feature Lockdown Capability: Not Supported 00:23:11.477 Abort Command Limit: 4 00:23:11.477 Async Event Request Limit: 4 00:23:11.477 Number of Firmware Slots: N/A 00:23:11.477 Firmware Slot 1 Read-Only: N/A 00:23:11.477 Firmware Activation Without Reset: N/A 00:23:11.477 Multiple Update Detection Support: N/A 00:23:11.477 Firmware Update Granularity: No Information Provided 00:23:11.477 Per-Namespace SMART Log: Yes 00:23:11.477 Asymmetric Namespace Access Log Page: Supported 00:23:11.477 ANA Transition Time : 10 sec 00:23:11.477 00:23:11.477 Asymmetric Namespace Access Capabilities 00:23:11.477 ANA Optimized State : Supported 00:23:11.477 ANA Non-Optimized State : Supported 00:23:11.477 ANA Inaccessible State : Supported 00:23:11.477 ANA Persistent Loss State : Supported 00:23:11.477 ANA Change State : Supported 00:23:11.477 ANAGRPID is not changed : No 00:23:11.477 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:11.477 00:23:11.477 ANA Group Identifier Maximum : 128 00:23:11.477 Number of ANA Group Identifiers : 128 00:23:11.477 Max Number of Allowed Namespaces : 1024 00:23:11.477 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:11.477 Command Effects Log Page: Supported 00:23:11.477 Get Log Page Extended Data: Supported 00:23:11.477 Telemetry Log Pages: Not Supported 00:23:11.477 Persistent Event Log Pages: Not Supported 00:23:11.477 Supported Log Pages Log Page: May Support 00:23:11.477 Commands Supported & Effects Log Page: Not Supported 00:23:11.477 Feature Identifiers & Effects Log Page:May Support 00:23:11.477 NVMe-MI Commands & Effects Log Page: May Support 00:23:11.477 Data Area 4 for Telemetry Log: Not Supported 00:23:11.477 Error Log Page Entries Supported: 128 00:23:11.477 Keep Alive: Supported 00:23:11.477 Keep Alive Granularity: 1000 ms 00:23:11.477 00:23:11.477 NVM Command Set Attributes 00:23:11.477 ========================== 00:23:11.477 Submission Queue Entry Size 00:23:11.477 Max: 64 00:23:11.477 Min: 64 00:23:11.477 Completion Queue Entry Size 00:23:11.477 Max: 16 00:23:11.477 Min: 16 00:23:11.477 Number of Namespaces: 1024 00:23:11.477 Compare Command: Not Supported 00:23:11.477 Write Uncorrectable Command: Not Supported 00:23:11.477 Dataset Management Command: Supported 
00:23:11.477 Write Zeroes Command: Supported 00:23:11.477 Set Features Save Field: Not Supported 00:23:11.477 Reservations: Not Supported 00:23:11.477 Timestamp: Not Supported 00:23:11.477 Copy: Not Supported 00:23:11.477 Volatile Write Cache: Present 00:23:11.477 Atomic Write Unit (Normal): 1 00:23:11.477 Atomic Write Unit (PFail): 1 00:23:11.477 Atomic Compare & Write Unit: 1 00:23:11.477 Fused Compare & Write: Not Supported 00:23:11.477 Scatter-Gather List 00:23:11.477 SGL Command Set: Supported 00:23:11.477 SGL Keyed: Not Supported 00:23:11.477 SGL Bit Bucket Descriptor: Not Supported 00:23:11.477 SGL Metadata Pointer: Not Supported 00:23:11.477 Oversized SGL: Not Supported 00:23:11.477 SGL Metadata Address: Not Supported 00:23:11.477 SGL Offset: Supported 00:23:11.477 Transport SGL Data Block: Not Supported 00:23:11.477 Replay Protected Memory Block: Not Supported 00:23:11.477 00:23:11.477 Firmware Slot Information 00:23:11.477 ========================= 00:23:11.477 Active slot: 0 00:23:11.477 00:23:11.477 Asymmetric Namespace Access 00:23:11.477 =========================== 00:23:11.477 Change Count : 0 00:23:11.477 Number of ANA Group Descriptors : 1 00:23:11.477 ANA Group Descriptor : 0 00:23:11.477 ANA Group ID : 1 00:23:11.477 Number of NSID Values : 1 00:23:11.477 Change Count : 0 00:23:11.477 ANA State : 1 00:23:11.477 Namespace Identifier : 1 00:23:11.477 00:23:11.477 Commands Supported and Effects 00:23:11.477 ============================== 00:23:11.477 Admin Commands 00:23:11.477 -------------- 00:23:11.477 Get Log Page (02h): Supported 00:23:11.477 Identify (06h): Supported 00:23:11.477 Abort (08h): Supported 00:23:11.477 Set Features (09h): Supported 00:23:11.477 Get Features (0Ah): Supported 00:23:11.477 Asynchronous Event Request (0Ch): Supported 00:23:11.477 Keep Alive (18h): Supported 00:23:11.477 I/O Commands 00:23:11.477 ------------ 00:23:11.477 Flush (00h): Supported 00:23:11.477 Write (01h): Supported LBA-Change 00:23:11.477 Read (02h): Supported 00:23:11.477 Write Zeroes (08h): Supported LBA-Change 00:23:11.477 Dataset Management (09h): Supported 00:23:11.477 00:23:11.477 Error Log 00:23:11.477 ========= 00:23:11.477 Entry: 0 00:23:11.477 Error Count: 0x3 00:23:11.477 Submission Queue Id: 0x0 00:23:11.477 Command Id: 0x5 00:23:11.477 Phase Bit: 0 00:23:11.477 Status Code: 0x2 00:23:11.477 Status Code Type: 0x0 00:23:11.477 Do Not Retry: 1 00:23:11.477 Error Location: 0x28 00:23:11.477 LBA: 0x0 00:23:11.477 Namespace: 0x0 00:23:11.477 Vendor Log Page: 0x0 00:23:11.477 ----------- 00:23:11.477 Entry: 1 00:23:11.477 Error Count: 0x2 00:23:11.477 Submission Queue Id: 0x0 00:23:11.477 Command Id: 0x5 00:23:11.477 Phase Bit: 0 00:23:11.477 Status Code: 0x2 00:23:11.477 Status Code Type: 0x0 00:23:11.477 Do Not Retry: 1 00:23:11.477 Error Location: 0x28 00:23:11.477 LBA: 0x0 00:23:11.477 Namespace: 0x0 00:23:11.477 Vendor Log Page: 0x0 00:23:11.477 ----------- 00:23:11.477 Entry: 2 00:23:11.477 Error Count: 0x1 00:23:11.477 Submission Queue Id: 0x0 00:23:11.477 Command Id: 0x4 00:23:11.477 Phase Bit: 0 00:23:11.477 Status Code: 0x2 00:23:11.477 Status Code Type: 0x0 00:23:11.477 Do Not Retry: 1 00:23:11.477 Error Location: 0x28 00:23:11.477 LBA: 0x0 00:23:11.477 Namespace: 0x0 00:23:11.477 Vendor Log Page: 0x0 00:23:11.477 00:23:11.477 Number of Queues 00:23:11.477 ================ 00:23:11.477 Number of I/O Submission Queues: 128 00:23:11.477 Number of I/O Completion Queues: 128 00:23:11.477 00:23:11.477 ZNS Specific Controller Data 00:23:11.477 
============================ 00:23:11.477 Zone Append Size Limit: 0 00:23:11.477 00:23:11.477 00:23:11.477 Active Namespaces 00:23:11.477 ================= 00:23:11.477 get_feature(0x05) failed 00:23:11.477 Namespace ID:1 00:23:11.477 Command Set Identifier: NVM (00h) 00:23:11.477 Deallocate: Supported 00:23:11.477 Deallocated/Unwritten Error: Not Supported 00:23:11.477 Deallocated Read Value: Unknown 00:23:11.477 Deallocate in Write Zeroes: Not Supported 00:23:11.477 Deallocated Guard Field: 0xFFFF 00:23:11.477 Flush: Supported 00:23:11.477 Reservation: Not Supported 00:23:11.477 Namespace Sharing Capabilities: Multiple Controllers 00:23:11.477 Size (in LBAs): 1953525168 (931GiB) 00:23:11.477 Capacity (in LBAs): 1953525168 (931GiB) 00:23:11.477 Utilization (in LBAs): 1953525168 (931GiB) 00:23:11.477 UUID: f26b9e4a-fff6-4b16-81ec-0fa204eb49dc 00:23:11.477 Thin Provisioning: Not Supported 00:23:11.477 Per-NS Atomic Units: Yes 00:23:11.477 Atomic Boundary Size (Normal): 0 00:23:11.477 Atomic Boundary Size (PFail): 0 00:23:11.477 Atomic Boundary Offset: 0 00:23:11.477 NGUID/EUI64 Never Reused: No 00:23:11.477 ANA group ID: 1 00:23:11.477 Namespace Write Protected: No 00:23:11.477 Number of LBA Formats: 1 00:23:11.477 Current LBA Format: LBA Format #00 00:23:11.477 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:11.477 00:23:11.477 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:11.477 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:11.477 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:11.477 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.477 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.478 rmmod nvme_tcp 00:23:11.478 rmmod nvme_fabrics 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.478 19:17:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.387 
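The clean_kernel_target teardown traced next is the mirror image of the export sketched earlier; a minimal sketch, where the file receiving the echo 0 is the namespace enable attribute, inferred as above:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"                   # take the namespace offline
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"     # unlink the subsystem from the port
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules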
19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:13.387 19:17:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:14.761 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:14.761 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:14.761 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:14.761 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:14.761 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:14.761 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:14.761 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:14.761 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:14.761 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:15.698 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:15.698 00:23:15.698 real 0m9.211s 00:23:15.698 user 0m1.965s 00:23:15.698 sys 0m3.343s 00:23:15.698 19:17:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.698 19:17:56 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.698 ************************************ 00:23:15.698 END TEST nvmf_identify_kernel_target 00:23:15.698 ************************************ 00:23:15.698 19:17:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:15.698 19:17:56 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:15.698 19:17:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:15.698 19:17:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:15.698 19:17:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.698 ************************************ 00:23:15.698 START TEST nvmf_auth_host 00:23:15.698 ************************************ 00:23:15.698 19:17:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:15.955 * Looking for test storage... 00:23:15.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.955 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.955 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:15.955 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.955 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.955 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.955 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.955 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.956 19:17:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.896 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.897 
19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:17.897 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:17.897 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:17.897 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:17.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:23:17.897 00:23:17.897 --- 10.0.0.2 ping statistics --- 00:23:17.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.897 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:23:17.897 00:23:17.897 --- 10.0.0.1 ping statistics --- 00:23:17.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.897 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3392161 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3392161 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3392161 ']' 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
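The nvmf_tcp_init steps above give the test a self-contained two-node topology on a single machine: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the 10.0.0.1 endpoint, an iptables rule opens TCP/4420, and a ping in each direction sanity-checks the path. A minimal sketch of the same wiring, substituting a veth pair for the two physical e810 ports (the veth substitution is an assumption for machines without the test NICs; names and addresses follow the log):

# Rebuild the nvmf-tcp test topology with a veth pair standing in for cvl_0_0/cvl_0_1.
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link add cvl_0_1 type veth peer name cvl_0_0             # assumption: veth, not e810 ports
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side interface lives in the namespace
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                         # root-namespace endpoint
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                               # root namespace -> namespace
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # namespace -> root namespace

The SPDK application is then started inside the namespace (the ip netns exec prefix visible in the nvmfappstart line below), and in this particular test it plays the NVMe-oF host, dialing the kernel nvmet target that is later configured on 10.0.0.1 in the root namespace.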
00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.897 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=82fb67212d69f19f730ba4d05377ea28 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bJH 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 82fb67212d69f19f730ba4d05377ea28 0 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 82fb67212d69f19f730ba4d05377ea28 0 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=82fb67212d69f19f730ba4d05377ea28 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bJH 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bJH 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.bJH 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:18.155 
19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=becb9beafbcad4b8ad719d343b8786ca67c197d78505276be1795e86a1c01b35 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gCH 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key becb9beafbcad4b8ad719d343b8786ca67c197d78505276be1795e86a1c01b35 3 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 becb9beafbcad4b8ad719d343b8786ca67c197d78505276be1795e86a1c01b35 3 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=becb9beafbcad4b8ad719d343b8786ca67c197d78505276be1795e86a1c01b35 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:18.155 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gCH 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gCH 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gCH 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33664e958cc559b128c74bf4a986a4ee07824d605fea9d1b 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NVn 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33664e958cc559b128c74bf4a986a4ee07824d605fea9d1b 0 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33664e958cc559b128c74bf4a986a4ee07824d605fea9d1b 0 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33664e958cc559b128c74bf4a986a4ee07824d605fea9d1b 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NVn 00:23:18.413 19:17:58 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NVn 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NVn 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9f36611127189d985bcc4270dd19e6d2a8c7b767f5e78033 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kfe 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9f36611127189d985bcc4270dd19e6d2a8c7b767f5e78033 2 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9f36611127189d985bcc4270dd19e6d2a8c7b767f5e78033 2 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9f36611127189d985bcc4270dd19e6d2a8c7b767f5e78033 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kfe 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kfe 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.kfe 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:18.413 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=48b720c175254650c1b287b3114555df 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.uBH 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 48b720c175254650c1b287b3114555df 1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 48b720c175254650c1b287b3114555df 1 
00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=48b720c175254650c1b287b3114555df 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.uBH 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.uBH 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.uBH 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f1e0daecb3dba4ebf45337c7989cba74 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lhe 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f1e0daecb3dba4ebf45337c7989cba74 1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f1e0daecb3dba4ebf45337c7989cba74 1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f1e0daecb3dba4ebf45337c7989cba74 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lhe 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lhe 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.lhe 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=4fa317145ce1a900f5b55562b47f333c4c4ad1e57af1e85a 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FLf 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4fa317145ce1a900f5b55562b47f333c4c4ad1e57af1e85a 2 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4fa317145ce1a900f5b55562b47f333c4c4ad1e57af1e85a 2 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4fa317145ce1a900f5b55562b47f333c4c4ad1e57af1e85a 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:18.414 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FLf 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FLf 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.FLf 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=743bea7243e95e37acdbc6f0f6a984c0 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jkd 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 743bea7243e95e37acdbc6f0f6a984c0 0 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 743bea7243e95e37acdbc6f0f6a984c0 0 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=743bea7243e95e37acdbc6f0f6a984c0 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jkd 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jkd 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jkd 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cb81a84cda4b58805f5cfb66fdef09106ac6bc3772e9e267aa9deacc0c03cad7 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DhD 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cb81a84cda4b58805f5cfb66fdef09106ac6bc3772e9e267aa9deacc0c03cad7 3 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cb81a84cda4b58805f5cfb66fdef09106ac6bc3772e9e267aa9deacc0c03cad7 3 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cb81a84cda4b58805f5cfb66fdef09106ac6bc3772e9e267aa9deacc0c03cad7 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DhD 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DhD 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DhD 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3392161 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3392161 ']' 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
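Each gen_dhchap_key call above draws half the requested length in random bytes via xxd, keeps the result as an ASCII hex string, and then wraps that string into a DH-HMAC-CHAP secret before writing it out as one of the /tmp/spdk.key-<digest>.* files. A minimal sketch of what the format_key step appears to do (base64 of the secret bytes plus a little-endian CRC-32 trailer, framed as DHHC-1:<hash id>:<base64>:); the standalone layout below is illustrative rather than the script's exact implementation:

# Sketch: build a DHHC-1 secret the way format_key appears to (assumption: CRC-32 trailer).
key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as in "gen_dhchap_key null 48"
digest=0                                 # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the ASCII hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity trailer
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF

Base64-decoding the secret used later in the log (DHHC-1:00:MzM2NjRl...RLZziA==:) gives back the 33664e... hex string generated here plus four trailing bytes, which is consistent with that layout.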
00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.672 19:17:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bJH 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gCH ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gCH 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NVn 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.kfe ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kfe 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.uBH 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.lhe ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lhe 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FLf 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jkd ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jkd 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DhD 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:18.931 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:18.932 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
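The rpc_cmd keyring_file_add_key calls above hand every generated secret to the running nvmf_tgt's keyring under short names (key0..key4, ckey0..ckey3) so that later attach calls can refer to the names instead of the files. Outside the harness the same registration can be done directly with scripts/rpc.py; a sketch, with the RPC socket spelled out explicitly (the test's rpc_cmd wrapper normally supplies it):

# Register the on-disk DHCHAP secrets with the SPDK keyring, one entry per key file.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
$RPC -s $SOCK keyring_file_add_key key0  /tmp/spdk.key-null.bJH
$RPC -s $SOCK keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gCH
$RPC -s $SOCK keyring_file_add_key key1  /tmp/spdk.key-null.NVn
$RPC -s $SOCK keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kfe
# key2/ckey2, key3/ckey3 and key4 are registered the same way from their /tmp/spdk.key-* files.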
00:23:18.932 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:18.932 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:18.932 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:18.932 19:17:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:19.865 Waiting for block devices as requested 00:23:20.124 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:20.124 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:20.384 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:20.384 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:20.384 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:20.643 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:20.643 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:20.643 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:20.643 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:20.902 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:20.902 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:20.902 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:20.902 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:21.160 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:21.160 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:21.160 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:21.160 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:21.727 No valid GPT data, bailing 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:21.727 19:18:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:21.727 00:23:21.727 Discovery Log Number of Records 2, Generation counter 2 00:23:21.727 =====Discovery Log Entry 0====== 00:23:21.727 trtype: tcp 00:23:21.727 adrfam: ipv4 00:23:21.727 subtype: current discovery subsystem 00:23:21.727 treq: not specified, sq flow control disable supported 00:23:21.727 portid: 1 00:23:21.727 trsvcid: 4420 00:23:21.727 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:21.727 traddr: 10.0.0.1 00:23:21.727 eflags: none 00:23:21.727 sectype: none 00:23:21.727 =====Discovery Log Entry 1====== 00:23:21.727 trtype: tcp 00:23:21.727 adrfam: ipv4 00:23:21.727 subtype: nvme subsystem 00:23:21.727 treq: not specified, sq flow control disable supported 00:23:21.727 portid: 1 00:23:21.727 trsvcid: 4420 00:23:21.727 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:21.727 traddr: 10.0.0.1 00:23:21.727 eflags: none 00:23:21.727 sectype: none 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:21.727 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 
]] 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.728 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.986 nvme0n1 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.986 19:18:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.986 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.987 
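Before any of these connects could happen, configure_kernel_target and nvmet_auth_init (the mkdir/echo/ln -s sequence a little earlier in the log) assembled the kernel-side target purely through configfs: a subsystem nqn.2024-02.io.spdk:cnode0 backed by the local /dev/nvme0n1, a TCP port on 10.0.0.1:4420, and a single allowed host nqn.2024-02.io.spdk:host0 whose DH-HMAC-CHAP material nvmet_auth_set_key rewrites for each digest/dhgroup/key combination. A condensed sketch of those writes; the log only shows the values being echoed, so the attribute file names below (device_path, addr_*, dhchap_*) are the standard nvmet configfs names and should be treated as assumptions:

# Kernel nvmet target as driven by configure_kernel_target / nvmet_auth_set_key (sketch).
modprobe nvmet_tcp                                         # pulls in nvmet as well
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir -p "$subsys/namespaces/1" "$port" "$host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"     # back the namespace with the local disk
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"                    # listen where the SPDK host will dial in
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                        # expose the subsystem on the port

echo 0 > "$subsys/attr_allow_any_host"                     # only the linked host may connect
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"                  # per-round auth parameters
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MzM2NjRl...==:' > "$host/dhchap_key"       # host secret (truncated here)
echo 'DHHC-1:02:OWYzNjYx...==:' > "$host/dhchap_ctrl_key"  # controller secret (truncated here)

The nvme discover output in the log, with one discovery-log entry for the discovery subsystem and one for cnode0 on 10.0.0.1:4420, is the confirmation that this export is visible to the host.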
19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.987 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.244 nvme0n1 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.244 19:18:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.244 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.503 nvme0n1 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
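The trace above is one host-side iteration of the auth matrix (digest sha256, DH group ffdhe2048, key index 1). Condensed into plain RPC calls, the pattern it exercises looks roughly like the sketch below; the addresses, NQNs, flags and key names are copied from the log, but the sketch is only an illustration of the pattern, not the test script itself (the test drives the same RPCs through its rpc_cmd/connect_authenticate helpers, where rpc_cmd forwards to SPDK's scripts/rpc.py).

    # 1. Restrict the initiator to a single digest/DH-group combination.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe2048

    # 2. Attach the controller, authenticating with key1 and, for
    #    bidirectional authentication, the controller key ckey1.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Confirm the controller came up, then detach it before the next
    #    digest/dhgroup/keyid combination is tried.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    scripts/rpc.py bdev_nvme_detach_controller nvme0
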
00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.503 nvme0n1 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.503 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:22.768 19:18:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.768 19:18:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.768 nvme0n1 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.768 19:18:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.026 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.027 nvme0n1 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.027 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.286 nvme0n1 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.286 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.545 nvme0n1 00:23:23.545 
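The bare echo lines at host/auth.sh@48-@51 are the target half of each iteration: nvmet_auth_set_key pushes the digest, DH group and DHHC-1 secrets to the kernel nvmet target before the host tries to connect. xtrace does not print redirections, so the destinations are not visible in this log; the sketch below assumes they are the per-host DH-CHAP attributes in nvmet configfs, and those paths and attribute names are an assumption, not something the trace shows.

    # Assumed expansion of nvmet_auth_set_key for the sha256/ffdhe3072/key1
    # iteration above (configfs paths and attribute names are assumptions;
    # only the echoed values appear in the trace, secrets truncated here).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha256)'      > "$host_dir/dhchap_hash"      # digest
    echo 'ffdhe3072'         > "$host_dir/dhchap_dhgroup"   # DH group
    echo 'DHHC-1:00:MzM2...' > "$host_dir/dhchap_key"       # host secret
    echo 'DHHC-1:02:OWYz...' > "$host_dir/dhchap_ctrl_key"  # controller secret
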
19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.545 19:18:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.804 nvme0n1 00:23:23.804 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.804 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.804 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.805 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.064 nvme0n1 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.064 
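Stepping back from the individual iterations, the host/auth.sh@100-@102 markers show that this whole section is a three-level sweep: every digest (sha256, sha384, sha512), every FFDHE group (ffdhe2048 through ffdhe8192) and every key index (0-4, with key 4 carrying no controller key) is first configured on the target and then authenticated from the host. Reconstructed from those markers, the driving loop looks roughly like this; it is a reconstruction from the trace, not the verbatim script:

    for digest in "${digests[@]}"; do          # sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do     # key indices 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (auth.sh@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (auth.sh@104)
            done
        done
    done

The secrets themselves all use the DHHC-1 textual representation ("DHHC-1:NN:<base64>:"); the trace treats them as opaque strings, and so does this sketch.
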
19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.064 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.065 19:18:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.065 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.324 nvme0n1 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:24.324 19:18:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.324 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.325 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.325 19:18:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.325 19:18:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.325 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.325 19:18:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.892 nvme0n1 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.892 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.893 19:18:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.893 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.153 nvme0n1 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.153 19:18:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.153 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.413 nvme0n1 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
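(The per-key cycle traced above is driven entirely through SPDK's JSON-RPC interface; the following is a condensed sketch of one iteration, assuming the stock scripts/rpc.py wrapper stands in for the rpc_cmd helper and reusing only the addresses, NQNs, RPC names and key names that appear verbatim in the trace. key1/ckey1 refer to keys registered earlier in the test run, not defined here.)

    # allow DH-HMAC-CHAP with only the digest/dhgroup pair under test
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # connect to the target on 10.0.0.1:4420, authenticating with key1/ckey1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # confirm the controller authenticated and came up, then tear it down for the next key
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0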
00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.413 19:18:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.981 nvme0n1 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.981 19:18:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.981 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.241 nvme0n1 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:26.241 19:18:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.241 19:18:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.807 nvme0n1 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.807 
19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.807 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.808 19:18:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.808 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.376 nvme0n1 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.376 19:18:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.942 nvme0n1 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.942 
19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.942 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.509 nvme0n1 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.509 19:18:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.076 nvme0n1 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.076 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.335 19:18:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.271 nvme0n1 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.271 19:18:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.271 19:18:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.209 nvme0n1 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.209 19:18:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.158 nvme0n1 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.158 
19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
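(The trace's "for dhgroup in ${dhgroups[@]}" and "for keyid in ${!keys[@]}" lines, together with the progression sha256 -> sha384 and ffdhe4096 -> ffdhe6144 -> ffdhe8192 visible above, imply a nested sweep roughly like the sketch below. The array contents and the nvmet configfs paths behind nvmet_auth_set_key are assumptions for illustration; only the echoed digest, dhgroup and key values appear in the trace itself.)

    for digest in sha256 sha384 sha512; do                # sha512 assumed from the usual digest set
      for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
          # target side: point the kernel nvmet host entry at the key under test
          # (assumed configfs layout for nvmet DH-HMAC-CHAP support)
          echo "hmac(${digest})"  > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_hash"
          echo "${dhgroup}"       > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_dhgroup"
          echo "${keys[keyid]}"   > "/sys/kernel/config/nvmet/hosts/${hostnqn}/dhchap_key"
          # host side: the attach/verify/detach cycle sketched earlier
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done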
00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.158 19:18:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.095 nvme0n1 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.095 
19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.095 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.353 19:18:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.291 nvme0n1 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.291 nvme0n1 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.291 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.551 nvme0n1 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:34.551 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.552 19:18:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.809 19:18:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.809 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.809 19:18:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.809 nvme0n1 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.809 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.066 nvme0n1 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:35.066 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.067 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 nvme0n1 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.323 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.324 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 nvme0n1 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.582 19:18:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 nvme0n1 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.843 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 nvme0n1 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.105 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.106 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.106 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.106 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.402 nvme0n1 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.402 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.403 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.662 nvme0n1 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.662 19:18:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.662 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.663 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.663 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.663 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.663 19:18:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.663 19:18:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.663 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.663 19:18:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.922 nvme0n1 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.922 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 nvme0n1 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 19:18:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.753 nvme0n1 00:23:37.753 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.753 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.753 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.753 19:18:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.753 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.753 19:18:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:37.753 19:18:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.753 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.033 nvme0n1 00:23:38.033 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:38.034 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.292 nvme0n1 00:23:38.292 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.292 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.292 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.292 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.292 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.292 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.552 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.553 19:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.123 nvme0n1 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.123 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.124 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 nvme0n1 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.695 19:18:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.695 19:18:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.695 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.266 nvme0n1 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.266 19:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.835 nvme0n1 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.835 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.403 nvme0n1 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.403 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.404 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.404 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.663 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.663 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.663 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.663 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:41.663 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.663 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:41.663 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
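Every pass in the trace above and below repeats one sequence per dh group and key id; condensed from the trace into plain shell it is roughly the following (a sketch only: keys[] and ckeys[] are assumed to already hold the DHHC-1 secrets printed in the log, and rpc_cmd, nvmet_auth_set_key and the surrounding helpers are the same ones the xtrace output is stepping through):

  for dhgroup in "${dhgroups[@]}"; do          # ffdhe4096, ffdhe6144, ffdhe8192 in this sha384 section
    for keyid in "${!keys[@]}"; do             # key ids 0 through 4
      # target side: install the key (and controller key, when one exists) under hmac(sha384)
      nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
      # host side: restrict negotiation to the digest/dhgroup pair under test
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      # the controller key is optional; key id 4 has none, so the argument is dropped there
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      # authenticate over TCP to the target and confirm the controller comes up, then tear it down
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done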
00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.664 19:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.605 nvme0n1 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:42.605 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.606 19:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.545 nvme0n1 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.545 19:18:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.923 nvme0n1 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:44.923 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.924 19:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.862 nvme0n1 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.862 19:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:45.862 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.863 19:18:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.863 19:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.799 nvme0n1 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.799 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.800 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.800 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.800 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.800 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.058 nvme0n1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.058 19:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.058 nvme0n1 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.058 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.059 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.059 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.317 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.318 nvme0n1 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.318 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.576 19:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.576 19:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.576 nvme0n1 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.576 19:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.577 19:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:47.577 19:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.577 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.837 nvme0n1 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.837 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.096 nvme0n1 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.096 
19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.096 19:18:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.096 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.355 nvme0n1 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
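
Each iteration traced above follows the same pattern: host/auth.sh@103 calls nvmet_auth_set_key to provision the target side of DH-HMAC-CHAP for one key index (digest, FFDHE group, DHHC-1 key and optional controller key), then connect_authenticate at host/auth.sh@104 reconfigures the SPDK initiator and re-attaches the controller with the matching key pair before verifying and detaching it. The initiator-side RPC sequence for one iteration, as it appears in the trace, roughly amounts to the sketch below; rpc_cmd is the test framework's wrapper around scripts/rpc.py, and the key1/ckey1 names are assumed to have been registered earlier in the run (not shown in this excerpt).

  # one connect_authenticate iteration, reconstructed from the xtrace above
  # (sha512 / ffdhe3072 / key index 1 shown; keyN/ckeyN registration happens
  #  elsewhere in the test and is assumed here)
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next tuple
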
00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.355 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.613 nvme0n1 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.613 19:18:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.613 19:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
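
The nvmf/common.sh@741-755 lines traced here are get_main_ns_ip, which picks the address the host should dial based on the transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 in this run. A minimal reconstruction assembled from the traced statements is sketched below; the indirect expansion of the chosen variable name is an assumption (the trace only shows the guards and the final "echo 10.0.0.1"), and the real helper in nvmf/common.sh may differ in detail.

  # get_main_ns_ip as reconstructed from the nvmf/common.sh xtrace lines above
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # bail out if the transport is unset or has no candidate variable
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
      [[ -z ${!ip} ]] && return 1            # dereference the variable name
      echo "${!ip}"                          # 10.0.0.1 in this run
  }
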
00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.613 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.874 nvme0n1 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.874 
19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.874 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.134 nvme0n1 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.134 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.393 nvme0n1 00:23:49.393 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.393 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.393 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.393 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.393 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.393 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.653 19:18:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.653 19:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.920 nvme0n1 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.920 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.211 nvme0n1 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.211 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.779 nvme0n1 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.779 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.780 19:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.039 nvme0n1 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.039 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.607 nvme0n1 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.607 19:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.174 nvme0n1 00:23:52.174 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.174 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.174 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.175 19:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.743 nvme0n1 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.743 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.002 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.003 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.003 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.003 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.570 nvme0n1 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.570 19:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.135 nvme0n1 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.135 19:18:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJmYjY3MjEyZDY5ZjE5ZjczMGJhNGQwNTM3N2VhMjgn+IYx: 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmVjYjliZWFmYmNhZDRiOGFkNzE5ZDM0M2I4Nzg2Y2E2N2MxOTdkNzg1MDUyNzZiZTE3OTVlODZhMWMwMWIzNXnCYBw=: 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.135 19:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.071 nvme0n1 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.072 19:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.445 nvme0n1 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.445 19:18:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:56.445 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDhiNzIwYzE3NTI1NDY1MGMxYjI4N2IzMTE0NTU1ZGb3TxNB: 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: ]] 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjFlMGRhZWNiM2RiYTRlYmY0NTMzN2M3OTg5Y2JhNzRnm9zo: 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.446 19:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.385 nvme0n1 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGZhMzE3MTQ1Y2UxYTkwMGY1YjU1NTYyYjQ3ZjMzM2M0YzRhZDFlNTdhZjFlODVh/ODZgw==: 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzQzYmVhNzI0M2U5NWUzN2FjZGJjNmYwZjZhOTg0YzAsG7gm: 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:57.385 19:18:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.385 19:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 nvme0n1 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I4MWE4NGNkYTRiNTg4MDVmNWNmYjY2ZmRlZjA5MTA2YWM2YmMzNzcyZTllMjY3YWE5ZGVhY2MwYzAzY2FkN/2WJ4c=: 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:58.321 19:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.261 nvme0n1 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM2NjRlOTU4Y2M1NTliMTI4Yzc0YmY0YTk4NmE0ZWUwNzgyNGQ2MDVmZWE5ZDFiRLZziA==: 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: ]] 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWYzNjYxMTEyNzE4OWQ5ODViY2M0MjcwZGQxOWU2ZDJhOGM3Yjc2N2Y1ZTc4MDMz3AqnZw==: 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.261 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.521 
19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.521 request: 00:23:59.521 { 00:23:59.521 "name": "nvme0", 00:23:59.521 "trtype": "tcp", 00:23:59.521 "traddr": "10.0.0.1", 00:23:59.521 "adrfam": "ipv4", 00:23:59.521 "trsvcid": "4420", 00:23:59.521 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:59.521 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:59.521 "prchk_reftag": false, 00:23:59.521 "prchk_guard": false, 00:23:59.521 "hdgst": false, 00:23:59.521 "ddgst": false, 00:23:59.521 "method": "bdev_nvme_attach_controller", 00:23:59.521 "req_id": 1 00:23:59.521 } 00:23:59.521 Got JSON-RPC error response 00:23:59.521 response: 00:23:59.521 { 00:23:59.521 "code": -5, 00:23:59.521 "message": "Input/output error" 00:23:59.521 } 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.521 request: 00:23:59.521 { 00:23:59.521 "name": "nvme0", 00:23:59.521 "trtype": "tcp", 00:23:59.521 "traddr": "10.0.0.1", 00:23:59.521 "adrfam": "ipv4", 00:23:59.521 "trsvcid": "4420", 00:23:59.521 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:59.521 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:59.521 "prchk_reftag": false, 00:23:59.521 "prchk_guard": false, 00:23:59.521 "hdgst": false, 00:23:59.521 "ddgst": false, 00:23:59.521 "dhchap_key": "key2", 00:23:59.521 "method": "bdev_nvme_attach_controller", 00:23:59.521 "req_id": 1 00:23:59.521 } 00:23:59.521 Got JSON-RPC error response 00:23:59.521 response: 00:23:59.521 { 00:23:59.521 "code": -5, 00:23:59.521 "message": "Input/output error" 00:23:59.521 } 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:59.521 19:18:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.521 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.522 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:59.780 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:59.780 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.780 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.780 19:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.780 request: 00:23:59.780 { 00:23:59.780 "name": "nvme0", 00:23:59.780 "trtype": "tcp", 00:23:59.780 "traddr": "10.0.0.1", 00:23:59.781 "adrfam": "ipv4", 
00:23:59.781 "trsvcid": "4420", 00:23:59.781 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:59.781 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:59.781 "prchk_reftag": false, 00:23:59.781 "prchk_guard": false, 00:23:59.781 "hdgst": false, 00:23:59.781 "ddgst": false, 00:23:59.781 "dhchap_key": "key1", 00:23:59.781 "dhchap_ctrlr_key": "ckey2", 00:23:59.781 "method": "bdev_nvme_attach_controller", 00:23:59.781 "req_id": 1 00:23:59.781 } 00:23:59.781 Got JSON-RPC error response 00:23:59.781 response: 00:23:59.781 { 00:23:59.781 "code": -5, 00:23:59.781 "message": "Input/output error" 00:23:59.781 } 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.781 rmmod nvme_tcp 00:23:59.781 rmmod nvme_fabrics 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3392161 ']' 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3392161 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3392161 ']' 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3392161 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3392161 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3392161' 00:23:59.781 killing process with pid 3392161 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3392161 00:23:59.781 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3392161 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.040 19:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:02.578 19:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:03.515 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:03.515 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:03.515 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:03.515 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:03.515 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:03.515 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:03.515 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:03.515 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:03.515 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:04.448 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:04.448 19:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.bJH /tmp/spdk.key-null.NVn /tmp/spdk.key-sha256.uBH /tmp/spdk.key-sha384.FLf /tmp/spdk.key-sha512.DhD 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:04.448 19:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:05.862 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:05.862 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:05.862 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:05.862 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:05.862 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:05.862 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:05.862 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:05.862 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:05.862 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:05.862 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:05.862 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:05.862 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:05.862 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:05.862 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:05.862 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:05.862 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:05.862 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:05.862 00:24:05.862 real 0m49.964s 00:24:05.862 user 0m47.747s 00:24:05.862 sys 0m5.673s 00:24:05.863 19:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:05.863 19:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.863 ************************************ 00:24:05.863 END TEST nvmf_auth_host 00:24:05.863 ************************************ 00:24:05.863 19:18:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:05.863 19:18:46 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:05.863 19:18:46 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:05.863 19:18:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:05.863 19:18:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.863 19:18:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.863 ************************************ 00:24:05.863 START TEST nvmf_digest 00:24:05.863 ************************************ 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:05.863 * Looking for test storage... 
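(Recap, not part of the captured log: the nvmf_auth_host run that just finished above repeats one RPC cycle per key id. The sketch below restates that cycle using only the flags visible in the trace, assuming an SPDK checkout at the workspace path shown in the log, a target already listening on 10.0.0.1:4420 with subsystem nqn.2024-02.io.spdk:cnode0, and DH-HMAC-CHAP keys already registered under names like key1/ckey1 by earlier, unshown parts of host/auth.sh; treat the standalone rpc.py spelling of these options as an assumption for this SPDK revision.)

# Minimal sketch of one authenticate/verify/detach cycle from host/auth.sh (assumptions noted above)
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path taken from the log
rpc="$SPDK_DIR/scripts/rpc.py"
# restrict the initiator to a single digest/dhgroup pair, as bdev_nvme_set_options does in the trace
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# attach with a host key and a controller (bidirectional) key; key1/ckey1 are assumed to have been
# loaded earlier in the script, which is not shown in this part of the log
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify the controller came up, then tear it down before the next key id is tried
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # the test expects "nvme0" here
$rpc bdev_nvme_detach_controller nvme0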
00:24:05.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:05.863 19:18:46 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:05.863 19:18:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.766 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:07.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:07.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:07.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:07.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.767 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:24:08.026 00:24:08.026 --- 10.0.0.2 ping statistics --- 00:24:08.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.026 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:08.026 00:24:08.026 --- 10.0.0.1 ping statistics --- 00:24:08.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.026 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:08.026 ************************************ 00:24:08.026 START TEST nvmf_digest_clean 00:24:08.026 ************************************ 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3402314 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3402314 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3402314 ']' 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.026 
19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.026 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.026 [2024-07-15 19:18:48.379527] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:08.026 [2024-07-15 19:18:48.379596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.026 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.026 [2024-07-15 19:18:48.446338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.284 [2024-07-15 19:18:48.566010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.284 [2024-07-15 19:18:48.566070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.284 [2024-07-15 19:18:48.566086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.284 [2024-07-15 19:18:48.566099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.284 [2024-07-15 19:18:48.566110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:08.284 [2024-07-15 19:18:48.566140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.284 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.543 null0 00:24:08.543 [2024-07-15 19:18:48.761597] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.543 [2024-07-15 19:18:48.785803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3402333 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3402333 /var/tmp/bperf.sock 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3402333 ']' 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
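The trace above brings the target up: nvmf_tgt is started inside cvl_0_0_ns_spdk with --wait-for-rpc, and common_target_config then leaves it with a null0 bdev, an initialized TCP transport, and a listener on 10.0.0.2 port 4420. The individual RPCs issued by common_target_config are not expanded in the trace; the following is only a representative sketch that reaches the same state (the subsystem NQN matches the initiator's later connect, while the null bdev size and transport options beyond '-t tcp -o' are assumptions):

# Sketch of the target-side bring-up; binary and script paths match this workspace,
# other parameters (null bdev size/block size) are illustrative assumptions.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS_EXEC="ip netns exec cvl_0_0_ns_spdk"

$NS_EXEC $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
tgt_pid=$!

# stand-in for the harness's waitforlisten(): wait until the RPC socket exists
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc framework_start_init
$rpc bdev_null_create null0 100 4096                       # size (MiB) / block size assumed
$rpc nvmf_create_transport -t tcp -o                       # '-t tcp -o' as in NVMF_TRANSPORT_OPTS above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420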
00:24:08.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.543 19:18:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.543 [2024-07-15 19:18:48.837407] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:08.543 [2024-07-15 19:18:48.837482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402333 ] 00:24:08.543 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.543 [2024-07-15 19:18:48.904328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.803 [2024-07-15 19:18:49.024574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.373 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.631 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:09.631 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:09.631 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:09.631 19:18:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:09.889 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.889 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.453 nvme0n1 00:24:10.453 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:10.453 19:18:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:10.453 Running I/O for 2 seconds... 
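On the initiator side the harness runs bdevperf as a long-lived server (-z) with its own RPC socket, finishes deferred init, attaches an NVMe-oF controller over TCP with data digest enabled (--ddgst), and only then kicks off the timed workload via bdevperf.py perform_tests. A condensed sketch of that flow, using the same binaries and flags that appear in the trace (the socket wait loop stands in for the waitforlisten helper):

# Sketch of the initiator-side flow traced above; flags are copied from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

while [ ! -S $BPERF_SOCK ]; do sleep 0.1; done

rpc="$SPDK/scripts/rpc.py -s $BPERF_SOCK"
$rpc framework_start_init
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0         # exposes bdev nvme0n1

# run the 2-second workload defined on the bdevperf command line above
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests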
00:24:12.347 00:24:12.347 Latency(us) 00:24:12.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.347 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:12.347 nvme0n1 : 2.00 19662.63 76.81 0.00 0.00 6501.20 3106.89 22816.24 00:24:12.347 =================================================================================================================== 00:24:12.347 Total : 19662.63 76.81 0.00 0.00 6501.20 3106.89 22816.24 00:24:12.347 0 00:24:12.347 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:12.347 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:12.347 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:12.347 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:12.347 19:18:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:12.347 | select(.opcode=="crc32c") 00:24:12.347 | "\(.module_name) \(.executed)"' 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3402333 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3402333 ']' 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3402333 00:24:12.604 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:12.605 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.605 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402333 00:24:12.862 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:12.862 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:12.862 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402333' 00:24:12.862 killing process with pid 3402333 00:24:12.862 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3402333 00:24:12.862 Received shutdown signal, test time was about 2.000000 seconds 00:24:12.862 00:24:12.862 Latency(us) 00:24:12.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.862 =================================================================================================================== 00:24:12.862 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.862 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3402333 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:13.119 19:18:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3402874 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3402874 /var/tmp/bperf.sock 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3402874 ']' 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.119 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.120 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:13.120 [2024-07-15 19:18:53.369081] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:13.120 [2024-07-15 19:18:53.369162] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402874 ] 00:24:13.120 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.120 Zero copy mechanism will not be used. 
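After each bperf run, the get_accel_stats check traced above confirms that the TCP data-digest CRCs were really computed by the expected accel module: it pulls accel_get_stats over the bperf RPC socket, filters the crc32c opcode with jq, and verifies that the executed count is non-zero and the module name matches (software here, since DSA scanning is disabled). A condensed equivalent of that check:

# The crc32c accounting check performed after each run (scan_dsa=false,
# so the expected module is 'software').
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

read -r acc_module acc_executed < <($rpc accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

exp_module=software
(( acc_executed > 0 ))                  # digests were actually routed through accel
[[ $acc_module == "$exp_module" ]]      # and handled by the expected module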
00:24:13.120 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.120 [2024-07-15 19:18:53.430714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.120 [2024-07-15 19:18:53.544124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.378 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.378 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:13.378 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:13.378 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:13.378 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:13.636 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.636 19:18:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.894 nvme0n1 00:24:13.894 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:13.894 19:18:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.153 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:14.153 Zero copy mechanism will not be used. 00:24:14.153 Running I/O for 2 seconds... 
00:24:16.052 00:24:16.052 Latency(us) 00:24:16.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.052 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:16.052 nvme0n1 : 2.00 2806.85 350.86 0.00 0.00 5695.59 5218.61 13883.92 00:24:16.052 =================================================================================================================== 00:24:16.052 Total : 2806.85 350.86 0.00 0.00 5695.59 5218.61 13883.92 00:24:16.052 0 00:24:16.052 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:16.052 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:16.052 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:16.052 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:16.052 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:16.052 | select(.opcode=="crc32c") 00:24:16.052 | "\(.module_name) \(.executed)"' 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3402874 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3402874 ']' 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3402874 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402874 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402874' 00:24:16.310 killing process with pid 3402874 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3402874 00:24:16.310 Received shutdown signal, test time was about 2.000000 seconds 00:24:16.310 00:24:16.310 Latency(us) 00:24:16.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.310 =================================================================================================================== 00:24:16.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.310 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3402874 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:16.568 19:18:56 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3403278 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3403278 /var/tmp/bperf.sock 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3403278 ']' 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:16.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.568 19:18:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:16.568 [2024-07-15 19:18:56.975200] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:24:16.569 [2024-07-15 19:18:56.975281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403278 ] 00:24:16.827 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.827 [2024-07-15 19:18:57.035752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.827 [2024-07-15 19:18:57.149474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.827 19:18:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.827 19:18:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:16.827 19:18:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:16.827 19:18:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:16.827 19:18:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:17.394 19:18:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.394 19:18:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.651 nvme0n1 00:24:17.651 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:17.651 19:18:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:17.908 Running I/O for 2 seconds... 
00:24:19.825 00:24:19.825 Latency(us) 00:24:19.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.825 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:19.825 nvme0n1 : 2.00 20900.76 81.64 0.00 0.00 6114.81 3640.89 15825.73 00:24:19.825 =================================================================================================================== 00:24:19.825 Total : 20900.76 81.64 0.00 0.00 6114.81 3640.89 15825.73 00:24:19.825 0 00:24:19.825 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:19.825 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:19.825 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:19.825 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:19.825 | select(.opcode=="crc32c") 00:24:19.825 | "\(.module_name) \(.executed)"' 00:24:19.825 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3403278 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3403278 ']' 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3403278 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3403278 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3403278' 00:24:20.083 killing process with pid 3403278 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3403278 00:24:20.083 Received shutdown signal, test time was about 2.000000 seconds 00:24:20.083 00:24:20.083 Latency(us) 00:24:20.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.083 =================================================================================================================== 00:24:20.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.083 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3403278 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:20.341 19:19:00 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3403807 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3403807 /var/tmp/bperf.sock 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3403807 ']' 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:20.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.341 19:19:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:20.341 [2024-07-15 19:19:00.726976] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:20.341 [2024-07-15 19:19:00.727056] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403807 ] 00:24:20.341 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:20.341 Zero copy mechanism will not be used. 
00:24:20.341 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.598 [2024-07-15 19:19:00.788799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.598 [2024-07-15 19:19:00.903838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.575 19:19:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.575 19:19:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:21.575 19:19:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:21.575 19:19:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:21.575 19:19:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:21.833 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:21.833 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.090 nvme0n1 00:24:22.090 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:22.090 19:19:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.348 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.348 Zero copy mechanism will not be used. 00:24:22.348 Running I/O for 2 seconds... 
00:24:24.247 00:24:24.247 Latency(us) 00:24:24.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.247 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:24.247 nvme0n1 : 2.01 1933.11 241.64 0.00 0.00 8256.40 6602.15 16214.09 00:24:24.247 =================================================================================================================== 00:24:24.247 Total : 1933.11 241.64 0.00 0.00 8256.40 6602.15 16214.09 00:24:24.247 0 00:24:24.247 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:24.247 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:24.247 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:24.247 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:24.247 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:24.247 | select(.opcode=="crc32c") 00:24:24.247 | "\(.module_name) \(.executed)"' 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3403807 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3403807 ']' 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3403807 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3403807 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3403807' 00:24:24.505 killing process with pid 3403807 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3403807 00:24:24.505 Received shutdown signal, test time was about 2.000000 seconds 00:24:24.505 00:24:24.505 Latency(us) 00:24:24.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.505 =================================================================================================================== 00:24:24.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.505 19:19:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3403807 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3402314 00:24:24.762 19:19:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3402314 ']' 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3402314 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402314 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402314' 00:24:24.762 killing process with pid 3402314 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3402314 00:24:24.762 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3402314 00:24:25.020 00:24:25.020 real 0m17.079s 00:24:25.020 user 0m34.509s 00:24:25.020 sys 0m4.047s 00:24:25.020 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.020 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:25.020 ************************************ 00:24:25.020 END TEST nvmf_digest_clean 00:24:25.020 ************************************ 00:24:25.020 19:19:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:25.020 19:19:05 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:25.020 19:19:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:25.020 19:19:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.020 19:19:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:25.279 ************************************ 00:24:25.279 START TEST nvmf_digest_error 00:24:25.279 ************************************ 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3404379 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3404379 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3404379 ']' 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.279 [2024-07-15 19:19:05.513674] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:25.279 [2024-07-15 19:19:05.513754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.279 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.279 [2024-07-15 19:19:05.576113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.279 [2024-07-15 19:19:05.681053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.279 [2024-07-15 19:19:05.681107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.279 [2024-07-15 19:19:05.681136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.279 [2024-07-15 19:19:05.681147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.279 [2024-07-15 19:19:05.681157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:25.279 [2024-07-15 19:19:05.681184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.279 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.537 [2024-07-15 19:19:05.737709] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.537 null0 00:24:25.537 [2024-07-15 19:19:05.856337] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.537 [2024-07-15 19:19:05.880541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3404398 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3404398 /var/tmp/bperf.sock 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3404398 ']' 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:25.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.537 19:19:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.537 [2024-07-15 19:19:05.930688] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:25.537 [2024-07-15 19:19:05.930760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404398 ] 00:24:25.537 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.794 [2024-07-15 19:19:05.989854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.794 [2024-07-15 19:19:06.098152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.794 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.794 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:25.794 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:25.794 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:26.357 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:26.357 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.357 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.357 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.357 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.357 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.614 nvme0n1 00:24:26.614 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:26.614 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.614 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.614 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.614 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:26.614 19:19:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:26.871 Running I/O for 2 seconds... 00:24:26.871 [2024-07-15 19:19:07.111479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.871 [2024-07-15 19:19:07.111535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.871 [2024-07-15 19:19:07.111557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.871 [2024-07-15 19:19:07.126452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.871 [2024-07-15 19:19:07.126498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.871 [2024-07-15 19:19:07.126519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.871 [2024-07-15 19:19:07.140711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.871 [2024-07-15 19:19:07.140746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.871 [2024-07-15 19:19:07.140765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.154316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.154351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.154370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.168235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.168269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.168288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.179586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.179619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.179638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.194424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.194457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6789 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.194476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.207946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.207975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.208007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.220863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.220923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.220941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.233273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.233303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.233334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.246820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.246850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.246867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.259748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.259778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.259795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.271015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.271060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.271077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.286415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.286444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:19241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.286476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.872 [2024-07-15 19:19:07.296674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:26.872 [2024-07-15 19:19:07.296705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.872 [2024-07-15 19:19:07.296737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.310288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.310316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.310348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.325379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.325409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.325426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.336361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.336389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.336420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.350632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.350663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.350686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.363025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.363057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.363073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.375570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.375602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.375619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.388272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.388301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.388318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.402396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.402423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.402453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.413392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.413420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.413453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.427278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.427322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.427339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.440072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.440103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.440121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.452045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.452075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.452092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.466250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 
00:24:27.131 [2024-07-15 19:19:07.466287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.466304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.476987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.477016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.477034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.491376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.491406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.491423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.504031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.504060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.504077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.516680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.516709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.516725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.528535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.528564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.528581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.541066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.541096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.541114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.131 [2024-07-15 19:19:07.554371] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.131 [2024-07-15 19:19:07.554400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.131 [2024-07-15 19:19:07.554432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.389 [2024-07-15 19:19:07.566276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.389 [2024-07-15 19:19:07.566306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.389 [2024-07-15 19:19:07.566323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.389 [2024-07-15 19:19:07.580166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.389 [2024-07-15 19:19:07.580196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.389 [2024-07-15 19:19:07.580213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.389 [2024-07-15 19:19:07.591263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.389 [2024-07-15 19:19:07.591291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.389 [2024-07-15 19:19:07.591321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.389 [2024-07-15 19:19:07.605409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.389 [2024-07-15 19:19:07.605438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.389 [2024-07-15 19:19:07.605471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.616642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.616671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.616687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.629935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.629966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.629983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.643436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.643466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.643483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.654985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.655015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.655031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.668384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.668414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.668445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.681801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.681838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.681871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.692862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.692912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.692929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.706434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.706462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.706493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.718842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.718870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.718909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.731302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.731330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.731347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.744768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.744798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.744815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.756554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.756583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.756599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.769328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.769357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.769373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.782060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.782089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.782106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.794342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.794386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.794403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.807160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.807204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.807221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.390 [2024-07-15 19:19:07.820060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.390 [2024-07-15 19:19:07.820089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.390 [2024-07-15 19:19:07.820106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.649 [2024-07-15 19:19:07.833061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.649 [2024-07-15 19:19:07.833091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.649 [2024-07-15 19:19:07.833108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.649 [2024-07-15 19:19:07.845339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.649 [2024-07-15 19:19:07.845369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.649 [2024-07-15 19:19:07.845385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.649 [2024-07-15 19:19:07.856749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.649 [2024-07-15 19:19:07.856777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.649 [2024-07-15 19:19:07.856808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.649 [2024-07-15 19:19:07.872063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.649 [2024-07-15 19:19:07.872092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.649 [2024-07-15 19:19:07.872124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.649 [2024-07-15 19:19:07.884522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.649 [2024-07-15 19:19:07.884554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.649 [2024-07-15 19:19:07.884571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.895453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.895481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.650 [2024-07-15 19:19:07.895521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.909289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.909318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.909334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.920991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.921020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.921052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.935682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.935710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.935726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.948339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.948368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.948385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.961731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.961759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.961776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.973161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.973191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.973207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.985517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.985547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:2824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.985564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:07.999246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:07.999290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:07.999307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:08.011498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:08.011554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:08.011572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:08.022232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:08.022274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:08.022289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:08.035772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:08.035802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:08.035817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:08.048551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:08.048580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:08.048597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:08.061000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:08.061031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:08.061047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.650 [2024-07-15 19:19:08.076277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.650 [2024-07-15 19:19:08.076324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.650 [2024-07-15 19:19:08.076342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.086567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.086594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.086625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.101018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.101047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.101080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.113427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.113471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.113488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.125517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.125547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.125580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.137278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.137306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.137338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.150468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.150511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.150526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.163943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 
00:24:27.909 [2024-07-15 19:19:08.163973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.163990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.176600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.176629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.176662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.188640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.188667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.188698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.201981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.202011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.202027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.213982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.214011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.214028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.228922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.228952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.228976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.240709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.240739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.240756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.252508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.252536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.252567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.266498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.266532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.909 [2024-07-15 19:19:08.266550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.909 [2024-07-15 19:19:08.281020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.909 [2024-07-15 19:19:08.281050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.910 [2024-07-15 19:19:08.281067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.910 [2024-07-15 19:19:08.292704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.910 [2024-07-15 19:19:08.292736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.910 [2024-07-15 19:19:08.292754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.910 [2024-07-15 19:19:08.306151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.910 [2024-07-15 19:19:08.306193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.910 [2024-07-15 19:19:08.306212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.910 [2024-07-15 19:19:08.320613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.910 [2024-07-15 19:19:08.320646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.910 [2024-07-15 19:19:08.320665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.910 [2024-07-15 19:19:08.335242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:27.910 [2024-07-15 19:19:08.335275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.910 [2024-07-15 19:19:08.335294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.167 [2024-07-15 19:19:08.347359] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.167 [2024-07-15 19:19:08.347392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.167 [2024-07-15 19:19:08.347411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.167 [2024-07-15 19:19:08.359686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.167 [2024-07-15 19:19:08.359713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.167 [2024-07-15 19:19:08.359729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.167 [2024-07-15 19:19:08.374270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.167 [2024-07-15 19:19:08.374303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.167 [2024-07-15 19:19:08.374322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.167 [2024-07-15 19:19:08.389031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.167 [2024-07-15 19:19:08.389061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.167 [2024-07-15 19:19:08.389079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.167 [2024-07-15 19:19:08.400924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.167 [2024-07-15 19:19:08.400969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.167 [2024-07-15 19:19:08.400986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.167 [2024-07-15 19:19:08.415600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.415633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.415652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.430560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.430608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.430627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
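The entries above and below repeat the same three-line pattern: nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on the TCP qpair (0x1315d50), the affected READ is printed, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the optional data digest is a CRC32C over the PDU's DATA field, so each entry means the digest computed over the received payload did not match the digest carried in the PDU. The sketch below only illustrates that check; it is not SPDK source (SPDK computes the CRC32C through its accel framework, as the function name in the log suggests), and the helper names and 512-byte payload are invented for the example.

/* Illustrative sketch of an NVMe/TCP data digest check (CRC32C over the PDU
 * DATA field).  Not SPDK code; crc32c() and data_digest_ok() are hypothetical
 * helpers for this example. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Bit-by-bit CRC32C using the reflected Castagnoli polynomial 0x82F63B78. */
static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int b = 0; b < 8; b++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Does the digest carried in the PDU match the received payload? */
static int
data_digest_ok(const uint8_t *data, size_t len, uint32_t received_digest)
{
	return crc32c(data, len) == received_digest;
}

int
main(void)
{
	uint8_t payload[512];

	memset(payload, 0, sizeof(payload));
	uint32_t good = crc32c(payload, sizeof(payload));

	assert(data_digest_ok(payload, sizeof(payload), good));

	payload[100] ^= 0x01;   /* one flipped bit is enough to fail the check */
	assert(!data_digest_ok(payload, sizeof(payload), good));
	return 0;
}

Note the dnr:0 in every completion: the transient transport status is reported as retryable, which is consistent with the workload continuing to issue READs and the pattern repeating for the remainder of the run.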
00:24:28.168 [2024-07-15 19:19:08.442599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.442633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.442651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.456230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.456263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.456287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.468509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.468542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.468560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.483118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.483148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.483165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.496317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.496351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.496370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.510816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.510849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.510869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.523459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.523492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.523511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.538508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.538541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.538560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.552151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.552198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.552217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.564752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.564786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.564804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.577041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.577074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.577106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.168 [2024-07-15 19:19:08.592100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.168 [2024-07-15 19:19:08.592135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.168 [2024-07-15 19:19:08.592154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.605985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.606015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.606032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.619248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.619282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.619300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.633424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.633458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.633477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.647030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.647063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.647080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.660575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.660610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.660630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.675077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.675107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.675139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.687095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.687123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.687155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.700587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.700621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.700640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.714190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.714224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:28.426 [2024-07-15 19:19:08.714242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.728680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.728713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.728732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.740402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.740436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.740455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.753895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.753928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.426 [2024-07-15 19:19:08.753960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.426 [2024-07-15 19:19:08.768347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.426 [2024-07-15 19:19:08.768380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.427 [2024-07-15 19:19:08.768398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.427 [2024-07-15 19:19:08.781807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.427 [2024-07-15 19:19:08.781839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.427 [2024-07-15 19:19:08.781858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.427 [2024-07-15 19:19:08.794336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.427 [2024-07-15 19:19:08.794369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.427 [2024-07-15 19:19:08.794388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.427 [2024-07-15 19:19:08.809057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.427 [2024-07-15 19:19:08.809087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:5511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.427 [2024-07-15 19:19:08.809109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.427 [2024-07-15 19:19:08.822549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.427 [2024-07-15 19:19:08.822583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.427 [2024-07-15 19:19:08.822601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.427 [2024-07-15 19:19:08.836606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.427 [2024-07-15 19:19:08.836639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.427 [2024-07-15 19:19:08.836658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.427 [2024-07-15 19:19:08.849283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.427 [2024-07-15 19:19:08.849316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.427 [2024-07-15 19:19:08.849334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.862225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.862258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.862276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.877449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.877482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.877501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.893539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.893571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.893589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.906588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.906621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.906640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.920785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.920819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.920838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.935916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.935962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.935979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.948602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.948635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.948654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.962584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.962617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.962636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.977563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.977596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.715 [2024-07-15 19:19:08.977614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.715 [2024-07-15 19:19:08.988805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.715 [2024-07-15 19:19:08.988838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:08.988857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 [2024-07-15 19:19:09.003866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 
00:24:28.716 [2024-07-15 19:19:09.003907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:09.003941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 [2024-07-15 19:19:09.021181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.716 [2024-07-15 19:19:09.021207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:09.021239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 [2024-07-15 19:19:09.038145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.716 [2024-07-15 19:19:09.038172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:09.038203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 [2024-07-15 19:19:09.050062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.716 [2024-07-15 19:19:09.050089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:09.050124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 [2024-07-15 19:19:09.064559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.716 [2024-07-15 19:19:09.064592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:09.064610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 [2024-07-15 19:19:09.081626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.716 [2024-07-15 19:19:09.081660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:09.081679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 [2024-07-15 19:19:09.096708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1315d50) 00:24:28.716 [2024-07-15 19:19:09.096741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.716 [2024-07-15 19:19:09.096760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.716 00:24:28.716 Latency(us) 00:24:28.716 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:24:28.716 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:28.716 nvme0n1 : 2.00 19280.08 75.31 0.00 0.00 6629.20 3422.44 17864.63 00:24:28.716 =================================================================================================================== 00:24:28.716 Total : 19280.08 75.31 0.00 0.00 6629.20 3422.44 17864.63 00:24:28.716 0 00:24:28.716 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:28.716 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:28.716 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:28.716 | .driver_specific 00:24:28.716 | .nvme_error 00:24:28.716 | .status_code 00:24:28.716 | .command_transient_transport_error' 00:24:28.716 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3404398 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3404398 ']' 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3404398 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3404398 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3404398' 00:24:28.975 killing process with pid 3404398 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3404398 00:24:28.975 Received shutdown signal, test time was about 2.000000 seconds 00:24:28.975 00:24:28.975 Latency(us) 00:24:28.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.975 =================================================================================================================== 00:24:28.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.975 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3404398 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3404812 00:24:29.234 
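(Explanatory aside, not part of the captured console output. The teardown above and the setup that resumes below both come from the digest-error stage of host/digest.sh, and the short bash sketch here only condenses the commands already visible in this trace: start bdevperf on its private RPC socket, attach an NVMe/TCP controller with data digest enabled, inject CRC32C corruption through the accel error-injection RPC, run I/O, then count completions that ended as COMMAND TRANSIENT TRANSPORT ERROR (00/22) via bdev_get_iostat and jq. The paths, sockets, flags and the jq filter are taken from the trace itself; the standalone default-socket invocation of accel_error_inject_error and the shell variables are assumptions of this sketch, since the real test issues that call through its rpc_cmd helper against an already-provisioned target at 10.0.0.2:4420.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # shorthand for the path used throughout this trace
bperf_rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Start bdevperf paused (-z) on its own RPC socket; 131072-byte randread, queue depth 16, 2 s run.
$spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# Enable per-status-code NVMe error counters; retry count is set exactly as in the trace above.
$bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with data digest enabled (--ddgst) so every READ payload is CRC32C-checked.
$bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c computation so received data digests stop matching; the trace issues
# this via the suite's rpc_cmd helper, so the default-socket call here is an assumption.
$spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive I/O, then read back how many completions were transient transport errors.
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
errors=$($bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errors > 0 ))    # the 4096-byte run that just finished reported 151 such completions

Setting --nvme-error-stat up front appears to be what makes the per-status-code counters, including command_transient_transport_error, show up under driver_specific.nvme_error in the bdev_get_iostat output queried by the jq filter above.)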
19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3404812 /var/tmp/bperf.sock 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3404812 ']' 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.234 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.492 [2024-07-15 19:19:09.684944] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:29.492 [2024-07-15 19:19:09.685034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404812 ] 00:24:29.492 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:29.492 Zero copy mechanism will not be used. 00:24:29.492 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.492 [2024-07-15 19:19:09.745073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.492 [2024-07-15 19:19:09.853084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.750 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.750 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:29.750 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:29.750 19:19:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:30.007 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:30.007 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.007 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.007 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.007 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.007 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.265 nvme0n1 00:24:30.265 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:30.265 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.265 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.265 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.265 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:30.265 19:19:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:30.524 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:30.524 Zero copy mechanism will not be used. 00:24:30.524 Running I/O for 2 seconds... 00:24:30.524 [2024-07-15 19:19:10.776584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.776635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.776657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.788317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.788354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.788373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.799959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.799988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.800020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.811182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.811231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.811250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.822399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.822432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.822450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.833845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.833888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.833910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.845237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.845277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.845297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.856520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.856554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.856573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.867781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.867814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.867833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.878930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.878976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.878993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.890164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.890208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.890224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.901363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.901396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.901415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.912356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.912390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.912408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.923478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.923510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.923528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.934905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.934961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.934982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.524 [2024-07-15 19:19:10.946084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.524 [2024-07-15 19:19:10.946113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.524 [2024-07-15 19:19:10.946133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.783 [2024-07-15 19:19:10.957254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:10.957284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:10.957316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:10.968502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:10.968535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:10.968554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:10.979647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:10.979680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:30.784 [2024-07-15 19:19:10.979698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:10.990972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:10.991010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:10.991027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.002281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.002314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.002332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.013541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.013574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.013592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.024970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.024998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.025030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.036181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.036233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.036253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.047358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.047391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.047410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.058439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.058475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.058494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.069859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.069915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.069933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.081170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.081217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.081236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.092531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.092564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.092582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.103715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.103750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.103769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.114898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.114930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.114963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.126115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.126144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.126160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.137322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.137363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.137378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.148525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.148557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.148575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.159803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.159836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.159854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.170998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.171026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.171058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.182086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.182116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.182132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.193437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.193468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.193486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.784 [2024-07-15 19:19:11.204602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:30.784 [2024-07-15 19:19:11.204644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.784 [2024-07-15 19:19:11.204660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.216026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 
00:24:31.043 [2024-07-15 19:19:11.216056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.216089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.227398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.227441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.227463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.238766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.238798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.238816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.249870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.249909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.249927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.261022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.261052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.261068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.272153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.272199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.272217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.283168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.283214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.283231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.294285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.294316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.294335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.305335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.305370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.305389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.316344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.316377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.316396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.327471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.327509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.327529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.043 [2024-07-15 19:19:11.338640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.043 [2024-07-15 19:19:11.338673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.043 [2024-07-15 19:19:11.338691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.349835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.349867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.349894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.361215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.361248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.361267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.372217] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.372262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.372281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.383385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.383418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.383435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.394385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.394417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.394436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.405418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.405452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.405470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.416316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.416349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.416367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.427480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.427527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.427546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.438554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.438587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.438605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.449767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.449800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.449819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.460827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.460861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.460888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.044 [2024-07-15 19:19:11.471886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.044 [2024-07-15 19:19:11.471919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.044 [2024-07-15 19:19:11.471952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.482931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.482978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.482995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.494275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.494308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.494327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.505425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.505459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.505478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.516525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.516563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.516583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.527692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.527724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.527743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.538767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.538799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.538817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.549738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.549771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.549789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.561027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.561059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.561076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.572093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.572123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.572140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.583240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.583273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.583292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.594484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.594517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.594535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.605618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.605652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.605671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.616709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.616743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.616762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.627718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.627752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.627770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.638898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.638944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.638961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.649985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.650031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.661287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.661320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.302 [2024-07-15 19:19:11.661340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.672253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.302 [2024-07-15 19:19:11.672286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.302 [2024-07-15 19:19:11.672305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.302 [2024-07-15 19:19:11.683474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.303 [2024-07-15 19:19:11.683507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.303 [2024-07-15 19:19:11.683525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.303 [2024-07-15 19:19:11.694717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.303 [2024-07-15 19:19:11.694750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.303 [2024-07-15 19:19:11.694768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.303 [2024-07-15 19:19:11.705976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.303 [2024-07-15 19:19:11.706006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.303 [2024-07-15 19:19:11.706028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.303 [2024-07-15 19:19:11.717199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.303 [2024-07-15 19:19:11.717245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.303 [2024-07-15 19:19:11.717264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.303 [2024-07-15 19:19:11.728309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.303 [2024-07-15 19:19:11.728342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.303 [2024-07-15 19:19:11.728361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.739421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.739454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.739474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.750661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.750693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.750712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.761810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.761858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.761887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.772847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.772898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.772919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.783872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.783912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.783946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.794921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.794967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.794984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.806302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.806341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.806361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.817481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.817515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.817534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.828645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.828678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.828696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.839898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.839944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.839961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.850950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.850995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.851012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.862217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.862251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.862270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.873198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.873245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.561 [2024-07-15 19:19:11.873263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.561 [2024-07-15 19:19:11.884388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.561 [2024-07-15 19:19:11.884420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.884439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.895371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.895404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.895422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.906693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 
00:24:31.562 [2024-07-15 19:19:11.906728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.906747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.917826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.917860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.917887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.929022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.929051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.929067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.939993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.940036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.940052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.951203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.951247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.951263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.962489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.962523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.962542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.973685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.973718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.973736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.562 [2024-07-15 19:19:11.984966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.562 [2024-07-15 19:19:11.984996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.562 [2024-07-15 19:19:11.985013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.821 [2024-07-15 19:19:11.996043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.821 [2024-07-15 19:19:11.996073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.821 [2024-07-15 19:19:11.996095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.821 [2024-07-15 19:19:12.007378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.821 [2024-07-15 19:19:12.007412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.821 [2024-07-15 19:19:12.007431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.821 [2024-07-15 19:19:12.018552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.821 [2024-07-15 19:19:12.018585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.821 [2024-07-15 19:19:12.018604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.821 [2024-07-15 19:19:12.029803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.821 [2024-07-15 19:19:12.029836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.821 [2024-07-15 19:19:12.029855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.821 [2024-07-15 19:19:12.040975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.821 [2024-07-15 19:19:12.041004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.821 [2024-07-15 19:19:12.041019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.821 [2024-07-15 19:19:12.051964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.821 [2024-07-15 19:19:12.051992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.052008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.063166] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.063213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.063232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.074299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.074346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.074366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.085950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.085982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.085999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.097184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.097234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.097253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.108470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.108505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.108525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.119659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.119692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.119711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.130807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.130840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.130859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:31.822 [2024-07-15 19:19:12.142039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.142083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.142099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.153397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.153432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.153450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.164571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.164604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.164623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.175746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.175779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.175798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.186830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.186863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.186896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.198035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.198065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.198081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.209165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.209194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.209225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.220288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.220317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.220334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.231258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.231291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.231310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.822 [2024-07-15 19:19:12.242378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:31.822 [2024-07-15 19:19:12.242411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.822 [2024-07-15 19:19:12.242430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.082 [2024-07-15 19:19:12.253574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.082 [2024-07-15 19:19:12.253607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.082 [2024-07-15 19:19:12.253626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.082 [2024-07-15 19:19:12.264857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.082 [2024-07-15 19:19:12.264900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.082 [2024-07-15 19:19:12.264920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.082 [2024-07-15 19:19:12.275914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.082 [2024-07-15 19:19:12.275964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.082 [2024-07-15 19:19:12.275980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.082 [2024-07-15 19:19:12.286922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.082 [2024-07-15 19:19:12.286975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.082 [2024-07-15 19:19:12.286992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.082 [2024-07-15 19:19:12.298103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.082 [2024-07-15 19:19:12.298133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.082 [2024-07-15 19:19:12.298150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.082 [2024-07-15 19:19:12.309205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.082 [2024-07-15 19:19:12.309251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.082 [2024-07-15 19:19:12.309268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.082 [2024-07-15 19:19:12.320483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.082 [2024-07-15 19:19:12.320516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.082 [2024-07-15 19:19:12.320535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.331646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.331682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.331701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.342698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.342731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.342750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.353903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.353950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.353967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.364946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.364976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:32.083 [2024-07-15 19:19:12.364992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.376270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.376303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.376322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.387395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.387429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.387447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.398365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.398397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.398415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.409402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.409436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.409455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.420694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.420728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.420747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.431934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.431964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.431980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.442998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.443026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.443042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.454100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.454129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.454145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.465270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.465303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.465322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.476472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.476505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.476530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.487769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.487803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.487822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.499142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.499193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.499213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.083 [2024-07-15 19:19:12.510426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.083 [2024-07-15 19:19:12.510460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.083 [2024-07-15 19:19:12.510479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.521650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.521681] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.521698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.532864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.532906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.532925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.543957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.543986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.544003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.555150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.555207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.555234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.566278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.566310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.566329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.577378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.577417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.577437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.588341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.588375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.588393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.599710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.599743] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.599763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.610883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.610917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.610950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.622034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.622064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.622080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.342 [2024-07-15 19:19:12.633286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.342 [2024-07-15 19:19:12.633319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.342 [2024-07-15 19:19:12.633338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.644491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.644523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.644541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.655778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.655812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.655830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.666914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.666963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.666980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.678040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.678069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.678085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.689362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.689394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.689413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.700758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.700790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.700809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.711950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.711980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.711996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.722972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.723004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.723022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.734144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.734192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.734212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.745219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.745253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.745273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.756358] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.756393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.756412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.343 [2024-07-15 19:19:12.767134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd104f0) 00:24:32.343 [2024-07-15 19:19:12.767169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.343 [2024-07-15 19:19:12.767187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.343 00:24:32.343 Latency(us) 00:24:32.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.343 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:32.343 nvme0n1 : 2.01 2768.08 346.01 0.00 0.00 5773.30 5267.15 12718.84 00:24:32.343 =================================================================================================================== 00:24:32.343 Total : 2768.08 346.01 0.00 0.00 5773.30 5267.15 12718.84 00:24:32.343 0 00:24:32.601 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:32.601 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:32.601 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:32.601 | .driver_specific 00:24:32.601 | .nvme_error 00:24:32.601 | .status_code 00:24:32.601 | .command_transient_transport_error' 00:24:32.601 19:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:32.601 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 179 > 0 )) 00:24:32.601 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3404812 00:24:32.601 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3404812 ']' 00:24:32.601 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3404812 00:24:32.601 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:32.859 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.859 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3404812 00:24:32.859 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:32.859 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:32.859 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3404812' 00:24:32.859 killing process with pid 3404812 00:24:32.859 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3404812 00:24:32.859 Received shutdown 
signal, test time was about 2.000000 seconds 00:24:32.859 00:24:32.860 Latency(us) 00:24:32.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.860 =================================================================================================================== 00:24:32.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.860 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3404812 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3405335 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3405335 /var/tmp/bperf.sock 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3405335 ']' 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:33.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.119 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:33.119 [2024-07-15 19:19:13.361218] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:24:33.119 [2024-07-15 19:19:13.361304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405335 ] 00:24:33.119 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.119 [2024-07-15 19:19:13.423028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.119 [2024-07-15 19:19:13.541033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.378 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.378 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:33.378 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:33.378 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:33.636 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:33.636 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.636 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:33.636 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.636 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:33.636 19:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:34.204 nvme0n1 00:24:34.204 19:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:34.204 19:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.204 19:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.204 19:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.204 19:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:34.204 19:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:34.204 Running I/O for 2 seconds... 
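The setup traced above amounts to a small RPC sequence: start bdevperf against a bperf RPC socket, enable per-status-code NVMe error statistics with unlimited retries, clear and then re-arm crc32c corruption in the target's accel layer, attach the controller with TCP data digest (--ddgst) enabled, run the workload, and finally read the transient-transport-error counter back from bdev_get_iostat. The sketch below is reconstructed from the traced commands only; the variable names, comments, and the assumption that the accel_error_inject_error calls go to the target's default RPC socket (they appear as plain rpc_cmd in the trace) are editorial, so treat it as a sketch of the flow rather than the canonical host/digest.sh implementation.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf in wait-for-RPC mode (-z): 4 KiB random writes, qd 128, 2 s, core mask 0x2.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

    # Host side (bperf socket): keep per-status-code error stats and retry failed I/O forever.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Target side (default RPC socket, per the rpc_cmd trace): make sure crc32c injection is off,
    # then attach from the host with data digest enabled.
    "$RPC" accel_error_inject_error -o crc32c -t disable
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th crc32c the target computes so the host sees data digest errors,
    # then drive the workload through bdevperf's RPC helper.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

    # Each digest error is retried and accounted as a transient transport error; the test
    # passes when the counter read back here is non-zero (179 in the randread run above).
    "$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The per-I/O trace that follows is exactly that detection path: each corrupted payload is flagged as a data digest error in tcp.c/nvme_tcp.c and completed as a COMMAND TRANSIENT TRANSPORT ERROR, which is what the bdev_get_iostat readback later counts.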
00:24:34.204 [2024-07-15 19:19:14.482358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ed920 00:24:34.204 [2024-07-15 19:19:14.483520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.204 [2024-07-15 19:19:14.483568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.204 [2024-07-15 19:19:14.495639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190eea00 00:24:34.204 [2024-07-15 19:19:14.496772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.204 [2024-07-15 19:19:14.496806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.204 [2024-07-15 19:19:14.508727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190efae0 00:24:34.204 [2024-07-15 19:19:14.509896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.204 [2024-07-15 19:19:14.509954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.204 [2024-07-15 19:19:14.521756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f0bc0 00:24:34.204 [2024-07-15 19:19:14.522919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.204 [2024-07-15 19:19:14.522955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.204 [2024-07-15 19:19:14.535089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f1ca0 00:24:34.205 [2024-07-15 19:19:14.536245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.536277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.205 [2024-07-15 19:19:14.547721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f2d80 00:24:34.205 [2024-07-15 19:19:14.548778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.548808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.205 [2024-07-15 19:19:14.560548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f3e60 00:24:34.205 [2024-07-15 19:19:14.561684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.561716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a 
p:0 m:0 dnr:0 00:24:34.205 [2024-07-15 19:19:14.573122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e73e0 00:24:34.205 [2024-07-15 19:19:14.574280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.574313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.205 [2024-07-15 19:19:14.585845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6300 00:24:34.205 [2024-07-15 19:19:14.587007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.587035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.205 [2024-07-15 19:19:14.598683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5220 00:24:34.205 [2024-07-15 19:19:14.599833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.599865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.205 [2024-07-15 19:19:14.611595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190dfdc0 00:24:34.205 [2024-07-15 19:19:14.612737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.612768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.205 [2024-07-15 19:19:14.624530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e0ea0 00:24:34.205 [2024-07-15 19:19:14.625651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.205 [2024-07-15 19:19:14.625683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.637272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e1f80 00:24:34.466 [2024-07-15 19:19:14.638441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.638473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.650068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e3060 00:24:34.466 [2024-07-15 19:19:14.651287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.651318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.662765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e4140 00:24:34.466 [2024-07-15 19:19:14.663894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.663926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.675536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ebb98 00:24:34.466 [2024-07-15 19:19:14.676645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.676676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.688307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ecc78 00:24:34.466 [2024-07-15 19:19:14.689438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.689469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.701058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190edd58 00:24:34.466 [2024-07-15 19:19:14.702189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.702232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.713772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190eee38 00:24:34.466 [2024-07-15 19:19:14.714937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.714966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.726586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190eff18 00:24:34.466 [2024-07-15 19:19:14.727702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.727733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.739301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f0ff8 00:24:34.466 [2024-07-15 19:19:14.740443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.740481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.752132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f20d8 00:24:34.466 [2024-07-15 19:19:14.753339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.753372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.764969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f31b8 00:24:34.466 [2024-07-15 19:19:14.766121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.766150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.777701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f4298 00:24:34.466 [2024-07-15 19:19:14.778833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.778866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.790408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6738 00:24:34.466 [2024-07-15 19:19:14.791548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.791581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.803196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5658 00:24:34.466 [2024-07-15 19:19:14.804330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.804362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.815867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190df988 00:24:34.466 [2024-07-15 19:19:14.817026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.817064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.828642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e0a68 00:24:34.466 [2024-07-15 19:19:14.829760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.829792] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.841348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e1b48 00:24:34.466 [2024-07-15 19:19:14.842491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.842522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.854045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e2c28 00:24:34.466 [2024-07-15 19:19:14.855266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.855297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.866745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e3d08 00:24:34.466 [2024-07-15 19:19:14.867864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.867902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.879495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e4de8 00:24:34.466 [2024-07-15 19:19:14.880612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.880643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.466 [2024-07-15 19:19:14.892276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ec840 00:24:34.466 [2024-07-15 19:19:14.893406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.466 [2024-07-15 19:19:14.893437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.726 [2024-07-15 19:19:14.904894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ed920 00:24:34.726 [2024-07-15 19:19:14.906103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.726 [2024-07-15 19:19:14.906132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.726 [2024-07-15 19:19:14.917645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190eea00 00:24:34.726 [2024-07-15 19:19:14.918783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.726 [2024-07-15 19:19:14.918815] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:14.930429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190efae0 00:24:34.727 [2024-07-15 19:19:14.931566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:14.931597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:14.942855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f0bc0 00:24:34.727 [2024-07-15 19:19:14.943884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:14.943923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:14.955592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f1ca0 00:24:34.727 [2024-07-15 19:19:14.956735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:14.956766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:14.968298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f2d80 00:24:34.727 [2024-07-15 19:19:14.969440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:14.969471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:14.981120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f3e60 00:24:34.727 [2024-07-15 19:19:14.982288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:14.982319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:14.993783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e73e0 00:24:34.727 [2024-07-15 19:19:14.994949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:14.994977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.006055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6300 00:24:34.727 [2024-07-15 19:19:15.007082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.007111] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.017787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5220 00:24:34.727 [2024-07-15 19:19:15.018885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.018913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.029616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190dfdc0 00:24:34.727 [2024-07-15 19:19:15.030664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.030692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.041413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e0ea0 00:24:34.727 [2024-07-15 19:19:15.042448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.042476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.053075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e1f80 00:24:34.727 [2024-07-15 19:19:15.054092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.054120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.064819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e3060 00:24:34.727 [2024-07-15 19:19:15.065916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.065945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.076598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e4140 00:24:34.727 [2024-07-15 19:19:15.077630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.077658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.088431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ebb98 00:24:34.727 [2024-07-15 19:19:15.089570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 
19:19:15.089598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.100245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ecc78 00:24:34.727 [2024-07-15 19:19:15.101296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.101325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.111978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190edd58 00:24:34.727 [2024-07-15 19:19:15.112974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.113012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.124010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fef90 00:24:34.727 [2024-07-15 19:19:15.125258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.125286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.135852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fd208 00:24:34.727 [2024-07-15 19:19:15.137086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.137114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.727 [2024-07-15 19:19:15.147617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fc128 00:24:34.727 [2024-07-15 19:19:15.148859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.727 [2024-07-15 19:19:15.148896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.159546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ea248 00:24:34.994 [2024-07-15 19:19:15.160794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.160822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.171382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e9168 00:24:34.994 [2024-07-15 19:19:15.172677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.994 [2024-07-15 19:19:15.172705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.183171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e8088 00:24:34.994 [2024-07-15 19:19:15.184429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.184457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.194978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6b70 00:24:34.994 [2024-07-15 19:19:15.196159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.196187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.206694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5a90 00:24:34.994 [2024-07-15 19:19:15.207951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.207980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.218475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190df550 00:24:34.994 [2024-07-15 19:19:15.219674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.219703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.230220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e0630 00:24:34.994 [2024-07-15 19:19:15.231427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.231455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.241896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e1710 00:24:34.994 [2024-07-15 19:19:15.243147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.243182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.253572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f8618 00:24:34.994 [2024-07-15 19:19:15.254761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2530 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.254798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.265438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f96f8 00:24:34.994 [2024-07-15 19:19:15.266717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.266747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.277335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fa7d8 00:24:34.994 [2024-07-15 19:19:15.278651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.278680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.289182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fb8b8 00:24:34.994 [2024-07-15 19:19:15.290383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.290411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.300855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190de470 00:24:34.994 [2024-07-15 19:19:15.302104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.302133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.312582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fdeb0 00:24:34.994 [2024-07-15 19:19:15.313832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.313862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.324443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190feb58 00:24:34.994 [2024-07-15 19:19:15.325629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.325658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.336241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fcdd0 00:24:34.994 [2024-07-15 19:19:15.337497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12380 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.337526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.347947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190eaef0 00:24:34.994 [2024-07-15 19:19:15.349190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.994 [2024-07-15 19:19:15.349218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.994 [2024-07-15 19:19:15.359614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e9e10 00:24:34.994 [2024-07-15 19:19:15.360814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.995 [2024-07-15 19:19:15.360843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.995 [2024-07-15 19:19:15.371582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e8d30 00:24:34.995 [2024-07-15 19:19:15.372774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.995 [2024-07-15 19:19:15.372802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.995 [2024-07-15 19:19:15.383409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e7c50 00:24:34.995 [2024-07-15 19:19:15.384617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.995 [2024-07-15 19:19:15.384644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.995 [2024-07-15 19:19:15.395261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5ec8 00:24:34.995 [2024-07-15 19:19:15.396509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.995 [2024-07-15 19:19:15.396537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.995 [2024-07-15 19:19:15.407094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190df118 00:24:34.995 [2024-07-15 19:19:15.408366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.995 [2024-07-15 19:19:15.408394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.995 [2024-07-15 19:19:15.418805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e01f8 00:24:34.995 [2024-07-15 19:19:15.420035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:23534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.995 [2024-07-15 19:19:15.420063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.430595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e12d8 00:24:35.255 [2024-07-15 19:19:15.431851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.431887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.442438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e23b8 00:24:35.255 [2024-07-15 19:19:15.443706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.443735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.454322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f92c0 00:24:35.255 [2024-07-15 19:19:15.455602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.455636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.466106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fa3a0 00:24:35.255 [2024-07-15 19:19:15.467356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.467384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.477829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fb480 00:24:35.255 [2024-07-15 19:19:15.479042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.479073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.489592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190de038 00:24:35.255 [2024-07-15 19:19:15.490790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.490818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.501479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fda78 00:24:35.255 [2024-07-15 19:19:15.502720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:24075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.502748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.513358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fef90 00:24:35.255 [2024-07-15 19:19:15.514591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.255 [2024-07-15 19:19:15.514631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.255 [2024-07-15 19:19:15.525287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fd208 00:24:35.256 [2024-07-15 19:19:15.526560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.526590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.537161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fc128 00:24:35.256 [2024-07-15 19:19:15.538478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.538507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.549321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ea248 00:24:35.256 [2024-07-15 19:19:15.550591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.550627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.561194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e9168 00:24:35.256 [2024-07-15 19:19:15.562413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.562442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.572934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e8088 00:24:35.256 [2024-07-15 19:19:15.574176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.574205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.584645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6b70 00:24:35.256 [2024-07-15 19:19:15.585827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.585855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.596525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5a90 00:24:35.256 [2024-07-15 19:19:15.597763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.597790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.608331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190df550 00:24:35.256 [2024-07-15 19:19:15.609601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.609630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.620162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e0630 00:24:35.256 [2024-07-15 19:19:15.621372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.621401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.631842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e1710 00:24:35.256 [2024-07-15 19:19:15.633036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.633065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.643644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f8618 00:24:35.256 [2024-07-15 19:19:15.644899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.644936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.655453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f96f8 00:24:35.256 [2024-07-15 19:19:15.656727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.656755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.667333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fa7d8 00:24:35.256 [2024-07-15 19:19:15.668613] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.668642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.256 [2024-07-15 19:19:15.679138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fb8b8 00:24:35.256 [2024-07-15 19:19:15.680320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.256 [2024-07-15 19:19:15.680348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.517 [2024-07-15 19:19:15.690903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190de470 00:24:35.517 [2024-07-15 19:19:15.692082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.517 [2024-07-15 19:19:15.692110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.517 [2024-07-15 19:19:15.702645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fdeb0 00:24:35.517 [2024-07-15 19:19:15.703897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.517 [2024-07-15 19:19:15.703924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.517 [2024-07-15 19:19:15.714584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190feb58 00:24:35.517 [2024-07-15 19:19:15.715782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.715811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.726305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fcdd0 00:24:35.518 [2024-07-15 19:19:15.727504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.727532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.738089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190eaef0 00:24:35.518 [2024-07-15 19:19:15.739294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.739321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.749728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e9e10 00:24:35.518 [2024-07-15 
19:19:15.750973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.751000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.761538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e8d30 00:24:35.518 [2024-07-15 19:19:15.762807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.762836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.773371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e7c50 00:24:35.518 [2024-07-15 19:19:15.774537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.774565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.785232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5ec8 00:24:35.518 [2024-07-15 19:19:15.786552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.786581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.797193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190df118 00:24:35.518 [2024-07-15 19:19:15.798506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.798535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.808960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e01f8 00:24:35.518 [2024-07-15 19:19:15.810155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.810189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.820641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e12d8 00:24:35.518 [2024-07-15 19:19:15.821827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.821855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.832584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with 
pdu=0x2000190e23b8 00:24:35.518 [2024-07-15 19:19:15.833833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.833861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.844325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f92c0 00:24:35.518 [2024-07-15 19:19:15.845616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.845644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.856114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fa3a0 00:24:35.518 [2024-07-15 19:19:15.857373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.857408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.867958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fb480 00:24:35.518 [2024-07-15 19:19:15.869142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.869171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.879682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190de038 00:24:35.518 [2024-07-15 19:19:15.880924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.880952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.891490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fda78 00:24:35.518 [2024-07-15 19:19:15.892753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.892781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.903263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fef90 00:24:35.518 [2024-07-15 19:19:15.904518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.904546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.915265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6a6b0) with pdu=0x2000190fd208 00:24:35.518 [2024-07-15 19:19:15.916545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.916576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.928053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190fc128 00:24:35.518 [2024-07-15 19:19:15.929354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.929384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.518 [2024-07-15 19:19:15.940807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ea248 00:24:35.518 [2024-07-15 19:19:15.942140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.518 [2024-07-15 19:19:15.942184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.779 [2024-07-15 19:19:15.953641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e9168 00:24:35.779 [2024-07-15 19:19:15.954970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.779 [2024-07-15 19:19:15.954997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.779 [2024-07-15 19:19:15.966434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e8088 00:24:35.780 [2024-07-15 19:19:15.967743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:15.967773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:15.979258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6b70 00:24:35.780 [2024-07-15 19:19:15.980576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:15.980606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:15.992129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5a90 00:24:35.780 [2024-07-15 19:19:15.993419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:15.993449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.004783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa6a6b0) with pdu=0x2000190df550 00:24:35.780 [2024-07-15 19:19:16.006109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.006136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.017457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e0630 00:24:35.780 [2024-07-15 19:19:16.018793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.018823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.030211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e1710 00:24:35.780 [2024-07-15 19:19:16.031483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.031514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.043350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ee5c8 00:24:35.780 [2024-07-15 19:19:16.044780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.044812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.056291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ef6a8 00:24:35.780 [2024-07-15 19:19:16.057788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.057819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.069049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e73e0 00:24:35.780 [2024-07-15 19:19:16.070499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.070530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.081805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f3e60 00:24:35.780 [2024-07-15 19:19:16.083315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.083345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.094516] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f2d80 00:24:35.780 [2024-07-15 19:19:16.096106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.096134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.107277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f1ca0 00:24:35.780 [2024-07-15 19:19:16.108762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.108793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.120017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f0bc0 00:24:35.780 [2024-07-15 19:19:16.121476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.121506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.132686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f57b0 00:24:35.780 [2024-07-15 19:19:16.134236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.134266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.145340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f6890 00:24:35.780 [2024-07-15 19:19:16.146812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.146842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.157997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f7970 00:24:35.780 [2024-07-15 19:19:16.159466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.159496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.170689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e88f8 00:24:35.780 [2024-07-15 19:19:16.172261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.172291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 
[2024-07-15 19:19:16.183467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e7818 00:24:35.780 [2024-07-15 19:19:16.185003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.185035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.196243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6300 00:24:35.780 [2024-07-15 19:19:16.197731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.197761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:35.780 [2024-07-15 19:19:16.209030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5220 00:24:35.780 [2024-07-15 19:19:16.210511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.780 [2024-07-15 19:19:16.210540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.221698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190dfdc0 00:24:36.040 [2024-07-15 19:19:16.223260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.223291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.234426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ed0b0 00:24:36.040 [2024-07-15 19:19:16.235943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.235970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.247232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ee190 00:24:36.040 [2024-07-15 19:19:16.248710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.248739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.259997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ef270 00:24:36.040 [2024-07-15 19:19:16.261441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.261471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a 
p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.272701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f0350 00:24:36.040 [2024-07-15 19:19:16.274205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.274235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.285317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f4298 00:24:36.040 [2024-07-15 19:19:16.286776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.286807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.298107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f31b8 00:24:36.040 [2024-07-15 19:19:16.299589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.299620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.310889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f20d8 00:24:36.040 [2024-07-15 19:19:16.312362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.312392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.323589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f0ff8 00:24:36.040 [2024-07-15 19:19:16.325230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.325260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.336290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f4b08 00:24:36.040 [2024-07-15 19:19:16.337770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.337800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.349087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f5be8 00:24:36.040 [2024-07-15 19:19:16.350524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.350554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.361645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f6cc8 00:24:36.040 [2024-07-15 19:19:16.363191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.363222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.374356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190f7da8 00:24:36.040 [2024-07-15 19:19:16.375857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.375893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.387132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e84c0 00:24:36.040 [2024-07-15 19:19:16.388591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.388621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.399840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e6738 00:24:36.040 [2024-07-15 19:19:16.401324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.401354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.412548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e5658 00:24:36.040 [2024-07-15 19:19:16.414123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.414151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.425292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190df988 00:24:36.040 [2024-07-15 19:19:16.426775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.040 [2024-07-15 19:19:16.426804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.040 [2024-07-15 19:19:16.438026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190e0a68 00:24:36.041 [2024-07-15 19:19:16.439495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.041 [2024-07-15 19:19:16.439524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.041 [2024-07-15 19:19:16.450723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ed4e8 00:24:36.041 [2024-07-15 19:19:16.452225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.041 [2024-07-15 19:19:16.452255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.041 [2024-07-15 19:19:16.463443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ee5c8 00:24:36.041 [2024-07-15 19:19:16.464961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.041 [2024-07-15 19:19:16.464988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.300 [2024-07-15 19:19:16.476170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a6b0) with pdu=0x2000190ef6a8 00:24:36.300 [2024-07-15 19:19:16.477597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.300 [2024-07-15 19:19:16.477628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:36.300 00:24:36.300 Latency(us) 00:24:36.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.300 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:36.300 nvme0n1 : 2.01 20750.81 81.06 0.00 0.00 6158.10 2900.57 13301.38 00:24:36.300 =================================================================================================================== 00:24:36.300 Total : 20750.81 81.06 0.00 0.00 6158.10 2900.57 13301.38 00:24:36.300 0 00:24:36.300 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:36.300 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:36.300 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:36.300 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:36.300 | .driver_specific 00:24:36.300 | .nvme_error 00:24:36.300 | .status_code 00:24:36.300 | .command_transient_transport_error' 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3405335 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3405335 ']' 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3405335 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3405335 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3405335' 00:24:36.561 killing process with pid 3405335 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3405335 00:24:36.561 Received shutdown signal, test time was about 2.000000 seconds 00:24:36.561 00:24:36.561 Latency(us) 00:24:36.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.561 =================================================================================================================== 00:24:36.561 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.561 19:19:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3405335 00:24:36.819 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:36.819 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:36.819 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:36.819 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:36.819 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:36.819 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3405749 00:24:36.819 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:36.820 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3405749 /var/tmp/bperf.sock 00:24:36.820 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3405749 ']' 00:24:36.820 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:36.820 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.820 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:36.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:36.820 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.820 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:36.820 [2024-07-15 19:19:17.093135] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:36.820 [2024-07-15 19:19:17.093221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405749 ] 00:24:36.820 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:36.820 Zero copy mechanism will not be used. 
00:24:36.820 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.820 [2024-07-15 19:19:17.150657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.077 [2024-07-15 19:19:17.260114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.077 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.077 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:37.077 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:37.077 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:37.334 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:37.334 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.334 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:37.334 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.334 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.334 19:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.591 nvme0n1 00:24:37.850 19:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:37.850 19:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.850 19:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:37.850 19:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.850 19:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:37.850 19:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:37.850 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:37.850 Zero copy mechanism will not be used. 00:24:37.850 Running I/O for 2 seconds... 
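For readers following the trace above, here is a condensed sketch of the RPC sequence this digest-error run drives, reconstructed only from the commands visible in the log (the bdevperf binary, /var/tmp/bperf.sock, the nvme0/nvme0n1 names, the 10.0.0.2:4420 target address, and the jq filter all appear in the trace). It is an illustrative approximation of what host/digest.sh's run_bperf_err/get_transient_errcount helpers do, not the script itself; in particular, the harness issues accel_error_inject_error through its target-side rpc_cmd helper, which is assumed here to mean the nvmf target's default RPC socket.

```bash
#!/usr/bin/env bash
# Sketch of the digest-error flow seen in the trace above; paths and names copied from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf as the TCP initiator (randwrite, 131072-byte I/O, queue depth 16, -z = wait for RPC).
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done   # crude stand-in for the harness's waitforlisten

# 2. Enable per-bdev NVMe error counters and unlimited retries on the initiator.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Start with crc32c error injection disabled on the target side (rpc_cmd in the harness;
#    default RPC socket assumed here), then attach the controller with data digest enabled.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Switch injection to corrupt crc32c results (arguments as issued in the trace), so data-digest
#    checks fail and the affected WRITEs complete with TRANSIENT TRANSPORT ERROR, as logged above.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# 5. Run the timed workload, then read back the transient-transport-error count for nvme0n1.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errs=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
       jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))   # the check mirrors the harness: pass when at least one transient error was counted
```

The error stream that follows is the expected effect of step 4: each corrupted crc32c calculation shows up as a "Data digest error" on the TCP qpair and a corresponding COMMAND TRANSIENT TRANSPORT ERROR completion, which step 5 then counts through bdev_get_iostat.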
00:24:37.850 [2024-07-15 19:19:18.171543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:37.850 [2024-07-15 19:19:18.171975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.850 [2024-07-15 19:19:18.172039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.851 [2024-07-15 19:19:18.187841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:37.851 [2024-07-15 19:19:18.188244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.851 [2024-07-15 19:19:18.188292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:37.851 [2024-07-15 19:19:18.206017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:37.851 [2024-07-15 19:19:18.206407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.851 [2024-07-15 19:19:18.206436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:37.851 [2024-07-15 19:19:18.224237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:37.851 [2024-07-15 19:19:18.224730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.851 [2024-07-15 19:19:18.224788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.851 [2024-07-15 19:19:18.242335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:37.851 [2024-07-15 19:19:18.242727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.851 [2024-07-15 19:19:18.242771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.851 [2024-07-15 19:19:18.259683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:37.851 [2024-07-15 19:19:18.260079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.851 [2024-07-15 19:19:18.260123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:37.851 [2024-07-15 19:19:18.276550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:37.851 [2024-07-15 19:19:18.276990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.851 [2024-07-15 19:19:18.277034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.292931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.293301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.293330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.310215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.310585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.310627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.326276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.326666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.326708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.343522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.343957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.343985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.361269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.361671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.361715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.378116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.378641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.378685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.395309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.395715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.395758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.411772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.412226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.412254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.428803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.429195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.429225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.444918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.445374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.445401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.462043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.462535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.462578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.478074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.478526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.478554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.494304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.494684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.494711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.510051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.510447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.510475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.111 [2024-07-15 19:19:18.527006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.111 [2024-07-15 19:19:18.527359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.111 [2024-07-15 19:19:18.527402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.543984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.544245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.544273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.561411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.561911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.561938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.578702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.579083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.579113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.595543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.595931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.595960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.612720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.613186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.613229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.629885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.630268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 
[2024-07-15 19:19:18.630296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.645814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.646206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.646234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.662351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.662823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.662873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.679636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.680075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.680119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.696095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.696483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.696512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.711497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.711956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.711982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.727748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.728267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.728308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.744120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.744492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.744521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.760402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.760822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.760852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.777518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.777924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.777953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.370 [2024-07-15 19:19:18.794178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.370 [2024-07-15 19:19:18.794564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.370 [2024-07-15 19:19:18.794590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.630 [2024-07-15 19:19:18.810615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.630 [2024-07-15 19:19:18.810974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.630 [2024-07-15 19:19:18.811003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.630 [2024-07-15 19:19:18.828324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.630 [2024-07-15 19:19:18.828541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.630 [2024-07-15 19:19:18.828569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.630 [2024-07-15 19:19:18.844523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.630 [2024-07-15 19:19:18.844937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.630 [2024-07-15 19:19:18.844982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.630 [2024-07-15 19:19:18.861308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.630 [2024-07-15 19:19:18.861688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.630 [2024-07-15 19:19:18.861731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.630 [2024-07-15 19:19:18.875986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.630 [2024-07-15 19:19:18.876372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.876414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:18.893055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:18.893433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.893478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:18.909496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:18.909979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.910020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:18.927973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:18.928420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.928447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:18.946092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:18.946448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.946477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:18.963001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:18.963358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.963387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:18.980052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:18.980424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.980466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:18.995871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:18.996250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:18.996279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:19.011785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:19.012154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:19.012184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:19.028104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:19.028516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:19.028545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:19.043857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:19.044124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:19.044152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.631 [2024-07-15 19:19:19.060412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.631 [2024-07-15 19:19:19.060765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.631 [2024-07-15 19:19:19.060794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.077606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.078011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.078056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.093607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.093995] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.094048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.110303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.110695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.110741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.126440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.126841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.126889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.142800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.143198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.143227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.159075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.159432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.159461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.175465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.175797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.175823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.191223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.191641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.191687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.208857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 
00:24:38.891 [2024-07-15 19:19:19.209227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.209256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.224937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.891 [2024-07-15 19:19:19.225366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.891 [2024-07-15 19:19:19.225410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.891 [2024-07-15 19:19:19.241914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.892 [2024-07-15 19:19:19.242379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.892 [2024-07-15 19:19:19.242408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.892 [2024-07-15 19:19:19.259197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.892 [2024-07-15 19:19:19.259650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.892 [2024-07-15 19:19:19.259692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.892 [2024-07-15 19:19:19.276229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.892 [2024-07-15 19:19:19.276557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.892 [2024-07-15 19:19:19.276585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.892 [2024-07-15 19:19:19.292533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.892 [2024-07-15 19:19:19.292930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.892 [2024-07-15 19:19:19.292959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.892 [2024-07-15 19:19:19.308186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:38.892 [2024-07-15 19:19:19.308565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.892 [2024-07-15 19:19:19.308608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.323510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.323920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.323950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.339968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.340368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.356270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.356735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.356776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.371918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.372361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.372399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.388968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.389399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.389441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.406004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.406466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.406509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.423162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.423562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.423590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.438351] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.438736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.438764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.455503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.455936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.455980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.151 [2024-07-15 19:19:19.472088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.151 [2024-07-15 19:19:19.472445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.151 [2024-07-15 19:19:19.472473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.152 [2024-07-15 19:19:19.489449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.152 [2024-07-15 19:19:19.489846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.152 [2024-07-15 19:19:19.489882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.152 [2024-07-15 19:19:19.506596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.152 [2024-07-15 19:19:19.506984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.152 [2024-07-15 19:19:19.507032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.152 [2024-07-15 19:19:19.523165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.152 [2024-07-15 19:19:19.523596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.152 [2024-07-15 19:19:19.523625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.152 [2024-07-15 19:19:19.539719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.152 [2024-07-15 19:19:19.540108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.152 [2024-07-15 19:19:19.540137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:39.152 [2024-07-15 19:19:19.556549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.152 [2024-07-15 19:19:19.556988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.152 [2024-07-15 19:19:19.557031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.152 [2024-07-15 19:19:19.572128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.152 [2024-07-15 19:19:19.572481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.152 [2024-07-15 19:19:19.572524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.588564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.588970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.589014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.605485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.605923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.605965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.621742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.622029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.622058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.639202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.639573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.639601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.655337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.655810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.655837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.670789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.671170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.671211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.686045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.686414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.686442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.703019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.703446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.703488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.718786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.719168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.719214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.735638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.736014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.736043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.753338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.753720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.753749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.769039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.769412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.769441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.785124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.785586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.785628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.800938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.801309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.801359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.817320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.817719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.817761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.412 [2024-07-15 19:19:19.833515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.412 [2024-07-15 19:19:19.833926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.412 [2024-07-15 19:19:19.833954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.672 [2024-07-15 19:19:19.849727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.672 [2024-07-15 19:19:19.850090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.672 [2024-07-15 19:19:19.850118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.672 [2024-07-15 19:19:19.865388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.672 [2024-07-15 19:19:19.865795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.672 [2024-07-15 19:19:19.865838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.672 [2024-07-15 19:19:19.881888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.672 [2024-07-15 19:19:19.882253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.672 [2024-07-15 19:19:19.882296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.672 [2024-07-15 19:19:19.898005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.672 [2024-07-15 19:19:19.898381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.672 [2024-07-15 19:19:19.898409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:19.914358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:19.914776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:19.914804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:19.929974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:19.930329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:19.930357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:19.945719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:19.946113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:19.946157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:19.962780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:19.963218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:19.963247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:19.979550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:19.979981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:19.980010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:19.996392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:19.996807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 
[2024-07-15 19:19:19.996836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:20.012214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:20.012601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:20.012652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:20.027248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:20.027677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:20.027717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:20.041369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:20.041746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:20.041798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:20.055081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:20.055445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:20.055493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:20.070518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:20.070925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:20.070954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:20.086788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:20.087309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:20.087350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.673 [2024-07-15 19:19:20.103241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.673 [2024-07-15 19:19:20.103729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:39.673 [2024-07-15 19:19:20.103773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.932 [2024-07-15 19:19:20.120281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.932 [2024-07-15 19:19:20.120662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.932 [2024-07-15 19:19:20.120690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.932 [2024-07-15 19:19:20.137705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.932 [2024-07-15 19:19:20.138125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.932 [2024-07-15 19:19:20.138166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.932 [2024-07-15 19:19:20.155598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x89faf0) with pdu=0x2000190fef90 00:24:39.932 [2024-07-15 19:19:20.155975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.932 [2024-07-15 19:19:20.156018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.932 00:24:39.932 Latency(us) 00:24:39.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.932 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:39.932 nvme0n1 : 2.01 1877.45 234.68 0.00 0.00 8499.52 2949.12 18641.35 00:24:39.932 =================================================================================================================== 00:24:39.932 Total : 1877.45 234.68 0.00 0.00 8499.52 2949.12 18641.35 00:24:39.932 0 00:24:39.932 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:39.932 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:39.932 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:39.932 | .driver_specific 00:24:39.932 | .nvme_error 00:24:39.932 | .status_code 00:24:39.932 | .command_transient_transport_error' 00:24:39.932 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3405749 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3405749 ']' 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3405749 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3405749 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3405749' 00:24:40.191 killing process with pid 3405749 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3405749 00:24:40.191 Received shutdown signal, test time was about 2.000000 seconds 00:24:40.191 00:24:40.191 Latency(us) 00:24:40.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.191 =================================================================================================================== 00:24:40.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.191 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3405749 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3404379 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3404379 ']' 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3404379 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3404379 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3404379' 00:24:40.451 killing process with pid 3404379 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3404379 00:24:40.451 19:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3404379 00:24:40.710 00:24:40.710 real 0m15.603s 00:24:40.711 user 0m31.418s 00:24:40.711 sys 0m3.807s 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:40.711 ************************************ 00:24:40.711 END TEST nvmf_digest_error 00:24:40.711 ************************************ 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
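The pass/fail decision for nvmf_digest_error traced above comes down to a single counter: the test queries the bdev I/O statistics over bdevperf's RPC socket and requires that the injected digest corruption produced a non-zero number of TRANSIENT TRANSPORT ERROR completions (121 in this run, per the "(( 121 > 0 ))" check). A minimal standalone sketch of that check, assuming the same SPDK checkout path and the /var/tmp/bperf.sock socket this job uses:

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes per-bdev NVMe error counters under driver_specific,
        # exactly as filtered by the jq expression in the trace above
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # non-zero only if the corrupted data digests were detected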
00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:40.711 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:40.711 rmmod nvme_tcp 00:24:40.711 rmmod nvme_fabrics 00:24:40.711 rmmod nvme_keyring 00:24:40.969 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:40.969 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:40.969 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:40.969 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3404379 ']' 00:24:40.969 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3404379 00:24:40.969 19:19:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3404379 ']' 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3404379 00:24:40.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3404379) - No such process 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3404379 is not found' 00:24:40.970 Process with pid 3404379 is not found 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.970 19:19:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.870 19:19:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:42.870 00:24:42.870 real 0m37.101s 00:24:42.870 user 1m6.772s 00:24:42.870 sys 0m9.407s 00:24:42.870 19:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:42.870 19:19:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:42.870 ************************************ 00:24:42.870 END TEST nvmf_digest 00:24:42.870 ************************************ 00:24:42.871 19:19:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:42.871 19:19:23 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:42.871 19:19:23 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:42.871 19:19:23 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:42.871 19:19:23 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:42.871 19:19:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:42.871 19:19:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.871 19:19:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:42.871 ************************************ 00:24:42.871 START TEST nvmf_bdevperf 00:24:42.871 ************************************ 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:42.871 * Looking for test storage... 00:24:42.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.871 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.127 19:19:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:45.081 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:45.081 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.081 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:45.082 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:45.082 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:24:45.082 00:24:45.082 --- 10.0.0.2 ping statistics --- 00:24:45.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.082 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:24:45.082 00:24:45.082 --- 10.0.0.1 ping statistics --- 00:24:45.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.082 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3408098 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3408098 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3408098 ']' 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.082 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.082 [2024-07-15 19:19:25.427050] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:45.082 [2024-07-15 19:19:25.427136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.082 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.082 [2024-07-15 19:19:25.490097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:45.341 [2024-07-15 19:19:25.600253] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
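The nvmf_tcp_init sequence traced just above builds the phy test topology: one port of the e810 NIC is moved into a dedicated network namespace for the target (10.0.0.2), the other port stays in the root namespace for the initiator (10.0.0.1), TCP/4420 is opened in iptables, and connectivity is confirmed with ping before nvmf_tgt is started inside the namespace. A condensed replay of those steps, assuming the interface names this job discovered (cvl_0_0 / cvl_0_1) and root privileges:

    TGT_NS=cvl_0_0_ns_spdk

    ip netns add "$TGT_NS"
    ip link set cvl_0_0 netns "$TGT_NS"                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator stays in the root ns
    ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TGT_NS" ip link set cvl_0_0 up
    ip netns exec "$TGT_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # connectivity check in both directions, as in the ping output above
    ping -c 1 10.0.0.2
    ip netns exec "$TGT_NS" ping -c 1 10.0.0.1

    # the target then runs inside the namespace on cores 1-3 (-m 0xE)
    ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &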
00:24:45.341 [2024-07-15 19:19:25.600312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.341 [2024-07-15 19:19:25.600344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.341 [2024-07-15 19:19:25.600365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.341 [2024-07-15 19:19:25.600375] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.341 [2024-07-15 19:19:25.600719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.341 [2024-07-15 19:19:25.600777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.341 [2024-07-15 19:19:25.600774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.341 [2024-07-15 19:19:25.749339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.341 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.600 Malloc0 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:45.600 [2024-07-15 19:19:25.808567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:45.600 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:45.600 { 00:24:45.600 "params": { 00:24:45.600 "name": "Nvme$subsystem", 00:24:45.600 "trtype": "$TEST_TRANSPORT", 00:24:45.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:45.600 "adrfam": "ipv4", 00:24:45.600 "trsvcid": "$NVMF_PORT", 00:24:45.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:45.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:45.600 "hdgst": ${hdgst:-false}, 00:24:45.600 "ddgst": ${ddgst:-false} 00:24:45.600 }, 00:24:45.600 "method": "bdev_nvme_attach_controller" 00:24:45.600 } 00:24:45.600 EOF 00:24:45.600 )") 00:24:45.601 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:45.601 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:45.601 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:45.601 19:19:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:45.601 "params": { 00:24:45.601 "name": "Nvme1", 00:24:45.601 "trtype": "tcp", 00:24:45.601 "traddr": "10.0.0.2", 00:24:45.601 "adrfam": "ipv4", 00:24:45.601 "trsvcid": "4420", 00:24:45.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.601 "hdgst": false, 00:24:45.601 "ddgst": false 00:24:45.601 }, 00:24:45.601 "method": "bdev_nvme_attach_controller" 00:24:45.601 }' 00:24:45.601 [2024-07-15 19:19:25.858188] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:45.601 [2024-07-15 19:19:25.858266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408241 ] 00:24:45.601 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.601 [2024-07-15 19:19:25.917348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.601 [2024-07-15 19:19:26.030512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.860 Running I/O for 1 seconds... 
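For reference, the config that gen_nvmf_target_json pipes to bdevperf over /dev/fd/62 can be reproduced by hand. A minimal sketch of an equivalent file-based run follows, using the same 128-deep 4 KiB verify workload as the 1-second run above; the outer "subsystems"/"bdev" wrapper is assumed rather than visible in the trace, and the file path is arbitrary:

# bdevperf.json carries the same attach-controller entry printed by gen_nvmf_target_json
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# same flags as the first run above: queue depth 128, 4096-byte verify I/O for 1 second
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1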
00:24:46.797 00:24:46.797 Latency(us) 00:24:46.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.797 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:46.797 Verification LBA range: start 0x0 length 0x4000 00:24:46.797 Nvme1n1 : 1.01 8764.33 34.24 0.00 0.00 14542.93 2876.30 15825.73 00:24:46.797 =================================================================================================================== 00:24:46.797 Total : 8764.33 34.24 0.00 0.00 14542.93 2876.30 15825.73 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3408383 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:47.092 { 00:24:47.092 "params": { 00:24:47.092 "name": "Nvme$subsystem", 00:24:47.092 "trtype": "$TEST_TRANSPORT", 00:24:47.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:47.092 "adrfam": "ipv4", 00:24:47.092 "trsvcid": "$NVMF_PORT", 00:24:47.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:47.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:47.092 "hdgst": ${hdgst:-false}, 00:24:47.092 "ddgst": ${ddgst:-false} 00:24:47.092 }, 00:24:47.092 "method": "bdev_nvme_attach_controller" 00:24:47.092 } 00:24:47.092 EOF 00:24:47.092 )") 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:47.092 19:19:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:47.092 "params": { 00:24:47.092 "name": "Nvme1", 00:24:47.092 "trtype": "tcp", 00:24:47.092 "traddr": "10.0.0.2", 00:24:47.092 "adrfam": "ipv4", 00:24:47.092 "trsvcid": "4420", 00:24:47.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.092 "hdgst": false, 00:24:47.092 "ddgst": false 00:24:47.092 }, 00:24:47.092 "method": "bdev_nvme_attach_controller" 00:24:47.092 }' 00:24:47.351 [2024-07-15 19:19:27.515669] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:47.351 [2024-07-15 19:19:27.515746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408383 ] 00:24:47.351 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.351 [2024-07-15 19:19:27.575009] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.351 [2024-07-15 19:19:27.681471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.609 Running I/O for 15 seconds... 
00:24:50.147 19:19:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3408098
00:24:50.147 19:19:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:24:50.147 [2024-07-15 19:19:30.487339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:50.147 [2024-07-15 19:19:30.487395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeats for every outstanding I/O on qid 1: READ lba 50480 through 51456 in steps of 8, plus WRITE lba 51472, 51480 and 51488, timestamps 19:19:30.487433 through 19:19:30.491737 ...]
00:24:50.150 [2024-07-15 19:19:30.491753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee54c0 is same with the state(5) to be set
00:24:50.150 [2024-07-15 19:19:30.491771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:50.150 [2024-07-15 19:19:30.491785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:50.150 [2024-07-15 19:19:30.491798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51464 len:8 PRP1 0x0 PRP2 0x0
00:24:50.150 [2024-07-15 19:19:30.491813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.150 [2024-07-15 19:19:30.491891] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ee54c0 was disconnected and freed. reset controller.
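Once the disconnected qpair has been freed, recovery is handled entirely by the bdev_nvme reset path shown below: each retry re-dials 10.0.0.2:4420 and, with the target process gone, connect() fails with errno 111 (ECONNREFUSED), so "Resetting controller failed" repeats until the target returns. How often the host re-dials and how long it waits before declaring the controller lost can be set per controller at attach time; a sketch of the equivalent rpc.py attach call with explicit retry pacing (the --reconnect-delay-sec / --ctrlr-loss-timeout-sec options are assumed to be available in this SPDK tree, and the values are illustrative only):

./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 30   # retry every 2s, give up after 30s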
00:24:50.150 [2024-07-15 19:19:30.495796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.150 [2024-07-15 19:19:30.495873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.150 [2024-07-15 19:19:30.496621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.150 [2024-07-15 19:19:30.496653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.150 [2024-07-15 19:19:30.496672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.150 [2024-07-15 19:19:30.496935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.150 [2024-07-15 19:19:30.497155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.150 [2024-07-15 19:19:30.497191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.150 [2024-07-15 19:19:30.497207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.150 [2024-07-15 19:19:30.500797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.150 [2024-07-15 19:19:30.509916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.150 [2024-07-15 19:19:30.510352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.150 [2024-07-15 19:19:30.510384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.150 [2024-07-15 19:19:30.510402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.150 [2024-07-15 19:19:30.510642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.150 [2024-07-15 19:19:30.510903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.150 [2024-07-15 19:19:30.510928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.151 [2024-07-15 19:19:30.510944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.151 [2024-07-15 19:19:30.514502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.151 [2024-07-15 19:19:30.523750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.151 [2024-07-15 19:19:30.524226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.151 [2024-07-15 19:19:30.524259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.151 [2024-07-15 19:19:30.524278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.151 [2024-07-15 19:19:30.524516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.151 [2024-07-15 19:19:30.524759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.151 [2024-07-15 19:19:30.524783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.151 [2024-07-15 19:19:30.524798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.151 [2024-07-15 19:19:30.528368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.151 [2024-07-15 19:19:30.537631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.151 [2024-07-15 19:19:30.538068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.151 [2024-07-15 19:19:30.538100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.151 [2024-07-15 19:19:30.538119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.151 [2024-07-15 19:19:30.538357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.151 [2024-07-15 19:19:30.538599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.151 [2024-07-15 19:19:30.538622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.151 [2024-07-15 19:19:30.538638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.151 [2024-07-15 19:19:30.542207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.151 [2024-07-15 19:19:30.551661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.151 [2024-07-15 19:19:30.552104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.151 [2024-07-15 19:19:30.552136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.151 [2024-07-15 19:19:30.552154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.151 [2024-07-15 19:19:30.552390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.151 [2024-07-15 19:19:30.552632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.151 [2024-07-15 19:19:30.552655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.151 [2024-07-15 19:19:30.552670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.151 [2024-07-15 19:19:30.556247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.151 [2024-07-15 19:19:30.565490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.151 [2024-07-15 19:19:30.566006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.151 [2024-07-15 19:19:30.566034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.151 [2024-07-15 19:19:30.566050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.151 [2024-07-15 19:19:30.566292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.151 [2024-07-15 19:19:30.566534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.151 [2024-07-15 19:19:30.566558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.151 [2024-07-15 19:19:30.566573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.151 [2024-07-15 19:19:30.570141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.413 [2024-07-15 19:19:30.579400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.413 [2024-07-15 19:19:30.579927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.413 [2024-07-15 19:19:30.579959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.413 [2024-07-15 19:19:30.579976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.413 [2024-07-15 19:19:30.580214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.413 [2024-07-15 19:19:30.580456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.413 [2024-07-15 19:19:30.580479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.413 [2024-07-15 19:19:30.580494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.413 [2024-07-15 19:19:30.584067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.413 [2024-07-15 19:19:30.593333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.413 [2024-07-15 19:19:30.593797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.413 [2024-07-15 19:19:30.593828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.413 [2024-07-15 19:19:30.593845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.413 [2024-07-15 19:19:30.594095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.413 [2024-07-15 19:19:30.594337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.413 [2024-07-15 19:19:30.594361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.413 [2024-07-15 19:19:30.594376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.413 [2024-07-15 19:19:30.597944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.413 [2024-07-15 19:19:30.607209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.413 [2024-07-15 19:19:30.607663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.413 [2024-07-15 19:19:30.607694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.413 [2024-07-15 19:19:30.607718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.413 [2024-07-15 19:19:30.607968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.413 [2024-07-15 19:19:30.608211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.413 [2024-07-15 19:19:30.608235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.413 [2024-07-15 19:19:30.608251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.413 [2024-07-15 19:19:30.611804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.413 [2024-07-15 19:19:30.621052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.413 [2024-07-15 19:19:30.621453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.413 [2024-07-15 19:19:30.621484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.413 [2024-07-15 19:19:30.621503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.413 [2024-07-15 19:19:30.621740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.413 [2024-07-15 19:19:30.621992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.413 [2024-07-15 19:19:30.622016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.413 [2024-07-15 19:19:30.622032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.413 [2024-07-15 19:19:30.625593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.413 [2024-07-15 19:19:30.635055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.413 [2024-07-15 19:19:30.635483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.413 [2024-07-15 19:19:30.635514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.413 [2024-07-15 19:19:30.635533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.413 [2024-07-15 19:19:30.635770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.413 [2024-07-15 19:19:30.636020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.413 [2024-07-15 19:19:30.636045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.413 [2024-07-15 19:19:30.636060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.413 [2024-07-15 19:19:30.639610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.414 [2024-07-15 19:19:30.649067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.649528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.649560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.649578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.649815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.650067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.650097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.650113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.653673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.414 [2024-07-15 19:19:30.662920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.663367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.663398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.663416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.663653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.663923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.663948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.663963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.667521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.414 [2024-07-15 19:19:30.676780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.677217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.677249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.677267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.677504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.677746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.677769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.677784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.681357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.414 [2024-07-15 19:19:30.690630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.691096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.691138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.691155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.691412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.691654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.691678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.691693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.695266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.414 [2024-07-15 19:19:30.704528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.704992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.705025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.705042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.705280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.705522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.705546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.705561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.709128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.414 [2024-07-15 19:19:30.718385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.718892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.718924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.718942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.719179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.719421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.719444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.719459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.723031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.414 [2024-07-15 19:19:30.732302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.732759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.732790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.732808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.733058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.733301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.733325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.733340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.736924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.414 [2024-07-15 19:19:30.746213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.746665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.746708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.746728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.747000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.747243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.747267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.747282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.750838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.414 [2024-07-15 19:19:30.760093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.760509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.760542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.760560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.760797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.761048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.761073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.761088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.764646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.414 [2024-07-15 19:19:30.773928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.774355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.774387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.774404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.774641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.774892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.774916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.774931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.414 [2024-07-15 19:19:30.778489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.414 [2024-07-15 19:19:30.787961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.414 [2024-07-15 19:19:30.788428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.414 [2024-07-15 19:19:30.788459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.414 [2024-07-15 19:19:30.788477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.414 [2024-07-15 19:19:30.788714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.414 [2024-07-15 19:19:30.788966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.414 [2024-07-15 19:19:30.788990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.414 [2024-07-15 19:19:30.789016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.415 [2024-07-15 19:19:30.792575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.415 [2024-07-15 19:19:30.801813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.415 [2024-07-15 19:19:30.802248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.415 [2024-07-15 19:19:30.802280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.415 [2024-07-15 19:19:30.802298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.415 [2024-07-15 19:19:30.802535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.415 [2024-07-15 19:19:30.802777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.415 [2024-07-15 19:19:30.802800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.415 [2024-07-15 19:19:30.802815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.415 [2024-07-15 19:19:30.806380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.415 [2024-07-15 19:19:30.815829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.415 [2024-07-15 19:19:30.816278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.415 [2024-07-15 19:19:30.816310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.415 [2024-07-15 19:19:30.816327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.415 [2024-07-15 19:19:30.816565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.415 [2024-07-15 19:19:30.816806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.415 [2024-07-15 19:19:30.816830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.415 [2024-07-15 19:19:30.816844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.415 [2024-07-15 19:19:30.820408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.415 [2024-07-15 19:19:30.829472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.415 [2024-07-15 19:19:30.829839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.415 [2024-07-15 19:19:30.829867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.415 [2024-07-15 19:19:30.829893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.415 [2024-07-15 19:19:30.830108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.415 [2024-07-15 19:19:30.830325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.415 [2024-07-15 19:19:30.830347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.415 [2024-07-15 19:19:30.830360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.415 [2024-07-15 19:19:30.833492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.415 [2024-07-15 19:19:30.843066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.675 [2024-07-15 19:19:30.843559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.843588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.843604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.843859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.844093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.844114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.844144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.847183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.676 [2024-07-15 19:19:30.856359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.856848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.856885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.856903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.857142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.857357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.857376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.857389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.860359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.676 [2024-07-15 19:19:30.869586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.869972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.870001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.870016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.870251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.870449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.870468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.870480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.873491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.676 [2024-07-15 19:19:30.882905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.883385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.883427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.883444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.883686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.883911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.883932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.883945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.886923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.676 [2024-07-15 19:19:30.896156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.896574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.896618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.896634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.896897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.897102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.897122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.897135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.900103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.676 [2024-07-15 19:19:30.909350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.909787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.909829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.909845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.910091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.910306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.910326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.910338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.913306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.676 [2024-07-15 19:19:30.922529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.922925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.922953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.922969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.923197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.923410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.923430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.923446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.926428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.676 [2024-07-15 19:19:30.935827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.936327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.936355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.936371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.936611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.936825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.936844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.936871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.939830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.676 [2024-07-15 19:19:30.949064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.949534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.949561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.949576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.949797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.950039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.950061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.950074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.953039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.676 [2024-07-15 19:19:30.962319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.962764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.962806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.962823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.963074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.676 [2024-07-15 19:19:30.963291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.676 [2024-07-15 19:19:30.963310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.676 [2024-07-15 19:19:30.963323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.676 [2024-07-15 19:19:30.966289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.676 [2024-07-15 19:19:30.975514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.676 [2024-07-15 19:19:30.975945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.676 [2024-07-15 19:19:30.975978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.676 [2024-07-15 19:19:30.975995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.676 [2024-07-15 19:19:30.976236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:30.976433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:30.976452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:30.976465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:30.979439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.677 [2024-07-15 19:19:30.988901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:30.989313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:30.989339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:30.989354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:30.989585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:30.989783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:30.989803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:30.989815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:30.992814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.677 [2024-07-15 19:19:31.002249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.002674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.002716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.002732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.002969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.003188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.003209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.003223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.006547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.677 [2024-07-15 19:19:31.015598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.016075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.016103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.016119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.016356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.016563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.016583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.016595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.019642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.677 [2024-07-15 19:19:31.028785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.029224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.029252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.029268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.029508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.029721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.029740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.029753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.032724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.677 [2024-07-15 19:19:31.041985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.042446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.042488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.042504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.042753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.042984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.043006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.043019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.045991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.677 [2024-07-15 19:19:31.055235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.055656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.055684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.055715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.055980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.056179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.056199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.056211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.059125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.677 [2024-07-15 19:19:31.068446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.068819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.068858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.068873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.069118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.069333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.069353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.069365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.072333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.677 [2024-07-15 19:19:31.081722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.082209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.082237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.082253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.082497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.082710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.082730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.082742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.085756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.677 [2024-07-15 19:19:31.094981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.677 [2024-07-15 19:19:31.095479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.677 [2024-07-15 19:19:31.095508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.677 [2024-07-15 19:19:31.095524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.677 [2024-07-15 19:19:31.095777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.677 [2024-07-15 19:19:31.096010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.677 [2024-07-15 19:19:31.096031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.677 [2024-07-15 19:19:31.096044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.677 [2024-07-15 19:19:31.099033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.937 [2024-07-15 19:19:31.108404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.937 [2024-07-15 19:19:31.108898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.937 [2024-07-15 19:19:31.108936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.937 [2024-07-15 19:19:31.108958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.937 [2024-07-15 19:19:31.109200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.937 [2024-07-15 19:19:31.109415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.937 [2024-07-15 19:19:31.109435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.937 [2024-07-15 19:19:31.109447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.937 [2024-07-15 19:19:31.112698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.937 [2024-07-15 19:19:31.121598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.937 [2024-07-15 19:19:31.122050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.937 [2024-07-15 19:19:31.122079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.937 [2024-07-15 19:19:31.122095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.937 [2024-07-15 19:19:31.122335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.937 [2024-07-15 19:19:31.122549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.937 [2024-07-15 19:19:31.122568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.937 [2024-07-15 19:19:31.122581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.937 [2024-07-15 19:19:31.125558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.937 [2024-07-15 19:19:31.134788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.937 [2024-07-15 19:19:31.135261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.937 [2024-07-15 19:19:31.135303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.937 [2024-07-15 19:19:31.135318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.937 [2024-07-15 19:19:31.135564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.937 [2024-07-15 19:19:31.135770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.937 [2024-07-15 19:19:31.135791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.937 [2024-07-15 19:19:31.135803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.937 [2024-07-15 19:19:31.138779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.937 [2024-07-15 19:19:31.148042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.937 [2024-07-15 19:19:31.148429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.937 [2024-07-15 19:19:31.148456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.937 [2024-07-15 19:19:31.148470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.937 [2024-07-15 19:19:31.148670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.937 [2024-07-15 19:19:31.148911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.937 [2024-07-15 19:19:31.148937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.937 [2024-07-15 19:19:31.148951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.937 [2024-07-15 19:19:31.151917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.937 [2024-07-15 19:19:31.161380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.937 [2024-07-15 19:19:31.161824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.937 [2024-07-15 19:19:31.161866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.937 [2024-07-15 19:19:31.161893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.937 [2024-07-15 19:19:31.162135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.937 [2024-07-15 19:19:31.162348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.937 [2024-07-15 19:19:31.162368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.937 [2024-07-15 19:19:31.162380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.937 [2024-07-15 19:19:31.165348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.937 [2024-07-15 19:19:31.174608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.937 [2024-07-15 19:19:31.175071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.937 [2024-07-15 19:19:31.175100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.937 [2024-07-15 19:19:31.175116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.937 [2024-07-15 19:19:31.175356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.937 [2024-07-15 19:19:31.175569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.937 [2024-07-15 19:19:31.175588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.937 [2024-07-15 19:19:31.175600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.937 [2024-07-15 19:19:31.178572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.937 [2024-07-15 19:19:31.187820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.937 [2024-07-15 19:19:31.188270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.937 [2024-07-15 19:19:31.188298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.937 [2024-07-15 19:19:31.188314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.937 [2024-07-15 19:19:31.188554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.937 [2024-07-15 19:19:31.188767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.188786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.188798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.191769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.938 [2024-07-15 19:19:31.201031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.201482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.201510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.201525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.201777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.202010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.202032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.202045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.205039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.938 [2024-07-15 19:19:31.214283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.214667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.214707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.214722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.214978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.215198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.215218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.215230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.218197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.938 [2024-07-15 19:19:31.227582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.228010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.228038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.228054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.228306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.228504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.228523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.228535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.231546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.938 [2024-07-15 19:19:31.240783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.241223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.241251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.241267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.241513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.241727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.241747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.241759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.244732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.938 [2024-07-15 19:19:31.253976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.254389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.254417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.254432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.254646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.254902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.254924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.254938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.258241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.938 [2024-07-15 19:19:31.267330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.267822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.267849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.267865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.268087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.268324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.268344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.268356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.271435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.938 [2024-07-15 19:19:31.280578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.281006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.281035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.281051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.281290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.281488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.281507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.281523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.284579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.938 [2024-07-15 19:19:31.293811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.294322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.294365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.294382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.294636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.294834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.294854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.294890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.297902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.938 [2024-07-15 19:19:31.307137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.307565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.307607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.307621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.307889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.308095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.308115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.308127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.938 [2024-07-15 19:19:31.311189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.938 [2024-07-15 19:19:31.320406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.938 [2024-07-15 19:19:31.320831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.938 [2024-07-15 19:19:31.320874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.938 [2024-07-15 19:19:31.320899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.938 [2024-07-15 19:19:31.321139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.938 [2024-07-15 19:19:31.321353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.938 [2024-07-15 19:19:31.321373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.938 [2024-07-15 19:19:31.321385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.939 [2024-07-15 19:19:31.324376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.939 [2024-07-15 19:19:31.334005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.939 [2024-07-15 19:19:31.334494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.939 [2024-07-15 19:19:31.334536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.939 [2024-07-15 19:19:31.334553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.939 [2024-07-15 19:19:31.334808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.939 [2024-07-15 19:19:31.335036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.939 [2024-07-15 19:19:31.335057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.939 [2024-07-15 19:19:31.335069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.939 [2024-07-15 19:19:31.338048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.939 [2024-07-15 19:19:31.347336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.939 [2024-07-15 19:19:31.347836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.939 [2024-07-15 19:19:31.347865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.939 [2024-07-15 19:19:31.347889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.939 [2024-07-15 19:19:31.348133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.939 [2024-07-15 19:19:31.348348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.939 [2024-07-15 19:19:31.348368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.939 [2024-07-15 19:19:31.348380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.939 [2024-07-15 19:19:31.351353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.939 [2024-07-15 19:19:31.360595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.939 [2024-07-15 19:19:31.361046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.939 [2024-07-15 19:19:31.361089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:50.939 [2024-07-15 19:19:31.361106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:50.939 [2024-07-15 19:19:31.361344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:50.939 [2024-07-15 19:19:31.361542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.939 [2024-07-15 19:19:31.361562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.939 [2024-07-15 19:19:31.361574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.939 [2024-07-15 19:19:31.364709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.200 [2024-07-15 19:19:31.374154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.374609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.374643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.374673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.374928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.375133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.375153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.375181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.378140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.200 [2024-07-15 19:19:31.387408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.387881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.387910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.387927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.388179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.388378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.388397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.388409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.391376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.200 [2024-07-15 19:19:31.400619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.401073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.401102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.401118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.401359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.401572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.401591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.401603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.404576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.200 [2024-07-15 19:19:31.413818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.414271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.414313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.414328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.414593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.414792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.414811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.414827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.417815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.200 [2024-07-15 19:19:31.427090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.427577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.427625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.427641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.427888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.428107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.428127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.428139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.431118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.200 [2024-07-15 19:19:31.440382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.440828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.440856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.440897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.441142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.441357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.441376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.441388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.444360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.200 [2024-07-15 19:19:31.453598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.454010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.454044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.454061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.454303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.454516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.454535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.454547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.457516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.200 [2024-07-15 19:19:31.466939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.467393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.467426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.467443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.467682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.467924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.200 [2024-07-15 19:19:31.467945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.200 [2024-07-15 19:19:31.467958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.200 [2024-07-15 19:19:31.470927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.200 [2024-07-15 19:19:31.480185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.200 [2024-07-15 19:19:31.480624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.200 [2024-07-15 19:19:31.480664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.200 [2024-07-15 19:19:31.480680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.200 [2024-07-15 19:19:31.480940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.200 [2024-07-15 19:19:31.481145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.481165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.481178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.484151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.201 [2024-07-15 19:19:31.493401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.493859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.493894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.493911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.494140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.494373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.494393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.494405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.497434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.201 [2024-07-15 19:19:31.506678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.507150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.507192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.507207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.507475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.507713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.507735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.507748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.511070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.201 [2024-07-15 19:19:31.519964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.520484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.520512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.520528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.520781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.521028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.521049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.521063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.524117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.201 [2024-07-15 19:19:31.533226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.533608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.533648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.533663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.533916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.534122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.534141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.534168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.537131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.201 [2024-07-15 19:19:31.546536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.546981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.547008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.547040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.547291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.547489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.547508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.547521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.550501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.201 [2024-07-15 19:19:31.559722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.560148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.560191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.560206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.560456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.560654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.560673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.560685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.563682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.201 [2024-07-15 19:19:31.572925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.573449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.573477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.573493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.573747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.573974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.573995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.574007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.576976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.201 [2024-07-15 19:19:31.586218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.586668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.586695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.586726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.586980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.587239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.587258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.587270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.590240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.201 [2024-07-15 19:19:31.599461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.599953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.599982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.600002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.600247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.600461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.600481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.600493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.603468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.201 [2024-07-15 19:19:31.612697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.613093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.613120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.613136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.613352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.613564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.201 [2024-07-15 19:19:31.613584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.201 [2024-07-15 19:19:31.613596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.201 [2024-07-15 19:19:31.616576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.201 [2024-07-15 19:19:31.626165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.201 [2024-07-15 19:19:31.626607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.201 [2024-07-15 19:19:31.626635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.201 [2024-07-15 19:19:31.626651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.201 [2024-07-15 19:19:31.626917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.201 [2024-07-15 19:19:31.627149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.202 [2024-07-15 19:19:31.627170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.202 [2024-07-15 19:19:31.627184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.630476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.463 [2024-07-15 19:19:31.639499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.640050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.640093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.463 [2024-07-15 19:19:31.640110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.463 [2024-07-15 19:19:31.640343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.463 [2024-07-15 19:19:31.640541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.463 [2024-07-15 19:19:31.640565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.463 [2024-07-15 19:19:31.640578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.643546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.463 [2024-07-15 19:19:31.652763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.653159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.653200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.463 [2024-07-15 19:19:31.653216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.463 [2024-07-15 19:19:31.653430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.463 [2024-07-15 19:19:31.653628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.463 [2024-07-15 19:19:31.653648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.463 [2024-07-15 19:19:31.653660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.656622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.463 [2024-07-15 19:19:31.666037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.666554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.666582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.463 [2024-07-15 19:19:31.666598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.463 [2024-07-15 19:19:31.666850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.463 [2024-07-15 19:19:31.667078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.463 [2024-07-15 19:19:31.667099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.463 [2024-07-15 19:19:31.667112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.670077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.463 [2024-07-15 19:19:31.679322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.679731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.679760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.463 [2024-07-15 19:19:31.679776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.463 [2024-07-15 19:19:31.680014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.463 [2024-07-15 19:19:31.680240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.463 [2024-07-15 19:19:31.680275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.463 [2024-07-15 19:19:31.680287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.683254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.463 [2024-07-15 19:19:31.692502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.692929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.692971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.463 [2024-07-15 19:19:31.692988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.463 [2024-07-15 19:19:31.693241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.463 [2024-07-15 19:19:31.693439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.463 [2024-07-15 19:19:31.693458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.463 [2024-07-15 19:19:31.693470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.696486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.463 [2024-07-15 19:19:31.706404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.706830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.706862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.463 [2024-07-15 19:19:31.706890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.463 [2024-07-15 19:19:31.707131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.463 [2024-07-15 19:19:31.707384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.463 [2024-07-15 19:19:31.707407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.463 [2024-07-15 19:19:31.707422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.710992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.463 [2024-07-15 19:19:31.720246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.720698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.720729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.463 [2024-07-15 19:19:31.720747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.463 [2024-07-15 19:19:31.720995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.463 [2024-07-15 19:19:31.721237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.463 [2024-07-15 19:19:31.721261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.463 [2024-07-15 19:19:31.721276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.463 [2024-07-15 19:19:31.724853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.463 [2024-07-15 19:19:31.734113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.463 [2024-07-15 19:19:31.734541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.463 [2024-07-15 19:19:31.734572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.734590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.734834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.735088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.735112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.735127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.738698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.464 [2024-07-15 19:19:31.747950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.748386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.748417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.748435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.748673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.748927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.748952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.748967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.752528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.464 [2024-07-15 19:19:31.761785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.762218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.762249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.762267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.762505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.762747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.762771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.762785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.766349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.464 [2024-07-15 19:19:31.775799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.776245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.776276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.776294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.776531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.776773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.776796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.776817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.780381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.464 [2024-07-15 19:19:31.789836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.790281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.790313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.790331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.790569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.790810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.790834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.790849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.794411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.464 [2024-07-15 19:19:31.803658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.804126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.804158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.804176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.804413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.804653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.804677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.804692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.808266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.464 [2024-07-15 19:19:31.817519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.817970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.818002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.818020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.818257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.818499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.818522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.818537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.822112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.464 [2024-07-15 19:19:31.831393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.831852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.831893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.831917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.832155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.832397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.832421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.832436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.836005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.464 [2024-07-15 19:19:31.845282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.845731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.845762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.845780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.846026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.846268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.846292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.846307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.849861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.464 [2024-07-15 19:19:31.859107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.859538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.859569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.859587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.859824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.860073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.860098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.860113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.863671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.464 [2024-07-15 19:19:31.873127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.873577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.873608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.464 [2024-07-15 19:19:31.873626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.464 [2024-07-15 19:19:31.873863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.464 [2024-07-15 19:19:31.874120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.464 [2024-07-15 19:19:31.874144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.464 [2024-07-15 19:19:31.874159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.464 [2024-07-15 19:19:31.877710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.464 [2024-07-15 19:19:31.886984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.464 [2024-07-15 19:19:31.887436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.464 [2024-07-15 19:19:31.887468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.465 [2024-07-15 19:19:31.887485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.465 [2024-07-15 19:19:31.887760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.465 [2024-07-15 19:19:31.888029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.465 [2024-07-15 19:19:31.888054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.465 [2024-07-15 19:19:31.888069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.465 [2024-07-15 19:19:31.891605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.726 [2024-07-15 19:19:31.900871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.726 [2024-07-15 19:19:31.901332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.901364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.901382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.901619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.901861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.901895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.901911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:31.905480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.727 [2024-07-15 19:19:31.914734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:31.915188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.915219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.915237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.915474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.915715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.915739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.915754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:31.919330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.727 [2024-07-15 19:19:31.928579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:31.929000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.929032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.929049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.929287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.929529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.929552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.929567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:31.933136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.727 [2024-07-15 19:19:31.942611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:31.943148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.943200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.943218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.943455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.943696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.943720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.943734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:31.947302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.727 [2024-07-15 19:19:31.956543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:31.956970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.957002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.957020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.957257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.957499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.957523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.957538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:31.961109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.727 [2024-07-15 19:19:31.970561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:31.970998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.971035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.971053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.971291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.971532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.971555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.971570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:31.975141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.727 [2024-07-15 19:19:31.984387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:31.984814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.984844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.984862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.985110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.985352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.985376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.985391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:31.988970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.727 [2024-07-15 19:19:31.998214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:31.998665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:31.998696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:31.998714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:31.998963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:31.999206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:31.999230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:31.999245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:32.002803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.727 [2024-07-15 19:19:32.012052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:32.012453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:32.012484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:32.012502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:32.012740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:32.012998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:32.013023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:32.013038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:32.016596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.727 [2024-07-15 19:19:32.026057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:32.026539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:32.026566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:32.026597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:32.026853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:32.027090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:32.027111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:32.027123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:32.030696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.727 [2024-07-15 19:19:32.039951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:32.040397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:32.040428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:32.040446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.727 [2024-07-15 19:19:32.040683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.727 [2024-07-15 19:19:32.040938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.727 [2024-07-15 19:19:32.040963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.727 [2024-07-15 19:19:32.040978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.727 [2024-07-15 19:19:32.044536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.727 [2024-07-15 19:19:32.053886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.727 [2024-07-15 19:19:32.054337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.727 [2024-07-15 19:19:32.054369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.727 [2024-07-15 19:19:32.054387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.054623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.054865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.054900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.054917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.058480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.728 [2024-07-15 19:19:32.067732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.728 [2024-07-15 19:19:32.068173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.728 [2024-07-15 19:19:32.068204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.728 [2024-07-15 19:19:32.068222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.068459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.068700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.068723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.068738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.072306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.728 [2024-07-15 19:19:32.081752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.728 [2024-07-15 19:19:32.082185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.728 [2024-07-15 19:19:32.082217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.728 [2024-07-15 19:19:32.082235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.082472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.082713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.082737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.082752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.086320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.728 [2024-07-15 19:19:32.095577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.728 [2024-07-15 19:19:32.096019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.728 [2024-07-15 19:19:32.096051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.728 [2024-07-15 19:19:32.096069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.096306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.096547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.096571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.096586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.100156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.728 [2024-07-15 19:19:32.109613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.728 [2024-07-15 19:19:32.110044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.728 [2024-07-15 19:19:32.110076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.728 [2024-07-15 19:19:32.110100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.110339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.110580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.110604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.110619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.114192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.728 [2024-07-15 19:19:32.123439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.728 [2024-07-15 19:19:32.123890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.728 [2024-07-15 19:19:32.123921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.728 [2024-07-15 19:19:32.123939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.124176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.124418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.124441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.124456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.128028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.728 [2024-07-15 19:19:32.137280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.728 [2024-07-15 19:19:32.137707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.728 [2024-07-15 19:19:32.137739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.728 [2024-07-15 19:19:32.137758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.138008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.138252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.138275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.138290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.141850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.728 [2024-07-15 19:19:32.151346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.728 [2024-07-15 19:19:32.151818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.728 [2024-07-15 19:19:32.151845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.728 [2024-07-15 19:19:32.151860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.728 [2024-07-15 19:19:32.152145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.728 [2024-07-15 19:19:32.152386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.728 [2024-07-15 19:19:32.152415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.728 [2024-07-15 19:19:32.152431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.728 [2024-07-15 19:19:32.156005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.989 [2024-07-15 19:19:32.165262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.989 [2024-07-15 19:19:32.165711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.989 [2024-07-15 19:19:32.165741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.989 [2024-07-15 19:19:32.165759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.166010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.166252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.166276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.166290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.169848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.990 [2024-07-15 19:19:32.179111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.179535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.179566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.179584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.179821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.180076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.180101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.180116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.183675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.990 [2024-07-15 19:19:32.192944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.193401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.193433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.193451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.193688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.193944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.193968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.193983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.197540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.990 [2024-07-15 19:19:32.206779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.207245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.207276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.207294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.207531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.207773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.207797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.207811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.211379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.990 [2024-07-15 19:19:32.220630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.221084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.221116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.221133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.221370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.221612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.221636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.221651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.225235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.990 [2024-07-15 19:19:32.234485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.234893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.234925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.234943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.235180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.235422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.235445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.235460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.239036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.990 [2024-07-15 19:19:32.248489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.248917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.248949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.248967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.249210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.249452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.249475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.249490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.253059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.990 [2024-07-15 19:19:32.262508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.262936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.262968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.262986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.263223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.263465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.263488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.263503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.267068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.990 [2024-07-15 19:19:32.276520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.276966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.276998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.277016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.277254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.277495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.277519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.277534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.281101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.990 [2024-07-15 19:19:32.290356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.290805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.290835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.290853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.291101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.291343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.291367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.291387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.294957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.990 [2024-07-15 19:19:32.304199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.304665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.304697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.304715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.990 [2024-07-15 19:19:32.304964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.990 [2024-07-15 19:19:32.305207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.990 [2024-07-15 19:19:32.305231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.990 [2024-07-15 19:19:32.305246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.990 [2024-07-15 19:19:32.308804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.990 [2024-07-15 19:19:32.318054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.990 [2024-07-15 19:19:32.318503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.990 [2024-07-15 19:19:32.318535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.990 [2024-07-15 19:19:32.318553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.318792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.319046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.319071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.319086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.322638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.991 [2024-07-15 19:19:32.331891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.991 [2024-07-15 19:19:32.332341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.991 [2024-07-15 19:19:32.332373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.991 [2024-07-15 19:19:32.332390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.332627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.332869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.332905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.332920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.336479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.991 [2024-07-15 19:19:32.345732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.991 [2024-07-15 19:19:32.346175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.991 [2024-07-15 19:19:32.346211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.991 [2024-07-15 19:19:32.346230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.346467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.346709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.346732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.346747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.350312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.991 [2024-07-15 19:19:32.359757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.991 [2024-07-15 19:19:32.360184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.991 [2024-07-15 19:19:32.360215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.991 [2024-07-15 19:19:32.360233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.360470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.360712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.360736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.360751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.364321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.991 [2024-07-15 19:19:32.373580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.991 [2024-07-15 19:19:32.374090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.991 [2024-07-15 19:19:32.374119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.991 [2024-07-15 19:19:32.374135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.374390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.374632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.374655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.374670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.378243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.991 [2024-07-15 19:19:32.387494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.991 [2024-07-15 19:19:32.388011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.991 [2024-07-15 19:19:32.388038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.991 [2024-07-15 19:19:32.388054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.388307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.388558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.388581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.388597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.392165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:51.991 [2024-07-15 19:19:32.401410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.991 [2024-07-15 19:19:32.401872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.991 [2024-07-15 19:19:32.401912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.991 [2024-07-15 19:19:32.401931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.402168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.402410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.402433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.402448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.406019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.991 [2024-07-15 19:19:32.415266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:51.991 [2024-07-15 19:19:32.415714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:51.991 [2024-07-15 19:19:32.415744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:51.991 [2024-07-15 19:19:32.415762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:51.991 [2024-07-15 19:19:32.416013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:51.991 [2024-07-15 19:19:32.416255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:51.991 [2024-07-15 19:19:32.416278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:51.991 [2024-07-15 19:19:32.416294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:51.991 [2024-07-15 19:19:32.419857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.253 [2024-07-15 19:19:32.429124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.253 [2024-07-15 19:19:32.429582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.253 [2024-07-15 19:19:32.429613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.253 [2024-07-15 19:19:32.429631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.253 [2024-07-15 19:19:32.429868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.253 [2024-07-15 19:19:32.430123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.253 [2024-07-15 19:19:32.430147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.253 [2024-07-15 19:19:32.430162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.253 [2024-07-15 19:19:32.433726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.253 [2024-07-15 19:19:32.442992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.253 [2024-07-15 19:19:32.443438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.253 [2024-07-15 19:19:32.443469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.253 [2024-07-15 19:19:32.443486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.253 [2024-07-15 19:19:32.443723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.253 [2024-07-15 19:19:32.443977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.253 [2024-07-15 19:19:32.444002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.253 [2024-07-15 19:19:32.444017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.253 [2024-07-15 19:19:32.447574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.253 [2024-07-15 19:19:32.456871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.253 [2024-07-15 19:19:32.457344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.253 [2024-07-15 19:19:32.457376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.253 [2024-07-15 19:19:32.457394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.253 [2024-07-15 19:19:32.457631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.253 [2024-07-15 19:19:32.457873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.253 [2024-07-15 19:19:32.457908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.253 [2024-07-15 19:19:32.457924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.253 [2024-07-15 19:19:32.461484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.253 [2024-07-15 19:19:32.470746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.253 [2024-07-15 19:19:32.471184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.253 [2024-07-15 19:19:32.471217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.253 [2024-07-15 19:19:32.471234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.253 [2024-07-15 19:19:32.471471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.253 [2024-07-15 19:19:32.471713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.253 [2024-07-15 19:19:32.471736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.253 [2024-07-15 19:19:32.471751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.253 [2024-07-15 19:19:32.475325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.253 [2024-07-15 19:19:32.484605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.253 [2024-07-15 19:19:32.485043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.253 [2024-07-15 19:19:32.485079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.253 [2024-07-15 19:19:32.485103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.253 [2024-07-15 19:19:32.485356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.253 [2024-07-15 19:19:32.485599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.253 [2024-07-15 19:19:32.485622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.253 [2024-07-15 19:19:32.485637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.253 [2024-07-15 19:19:32.489201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.253 [2024-07-15 19:19:32.498638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.253 [2024-07-15 19:19:32.499098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.253 [2024-07-15 19:19:32.499130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.253 [2024-07-15 19:19:32.499148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.253 [2024-07-15 19:19:32.499386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.253 [2024-07-15 19:19:32.499627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.253 [2024-07-15 19:19:32.499660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.253 [2024-07-15 19:19:32.499675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.253 [2024-07-15 19:19:32.503241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.253 [2024-07-15 19:19:32.512488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.253 [2024-07-15 19:19:32.513008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.253 [2024-07-15 19:19:32.513040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.253 [2024-07-15 19:19:32.513058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.253 [2024-07-15 19:19:32.513295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.253 [2024-07-15 19:19:32.513538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.253 [2024-07-15 19:19:32.513561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.253 [2024-07-15 19:19:32.513576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.253 [2024-07-15 19:19:32.517150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.254 [2024-07-15 19:19:32.526406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.526867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.526905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.526924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.527170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.527412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.527441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.527457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.531020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.254 [2024-07-15 19:19:32.540255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.540788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.540841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.540859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.541105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.541347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.541371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.541386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.544954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.254 [2024-07-15 19:19:32.554200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.554758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.554811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.554829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.555076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.555318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.555341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.555356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.558923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.254 [2024-07-15 19:19:32.568200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.568643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.568675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.568693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.568947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.569191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.569214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.569229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.572786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.254 [2024-07-15 19:19:32.582049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.582545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.582594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.582612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.582849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.583099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.583121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.583134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.586742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.254 [2024-07-15 19:19:32.596009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.596468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.596521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.596539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.596776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.597027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.597052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.597068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.600623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.254 [2024-07-15 19:19:32.609869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.610342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.610390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.610407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.610645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.610895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.610919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.610934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.614487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.254 [2024-07-15 19:19:32.623729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.624162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.624193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.624216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.624454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.624696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.624720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.624734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.628308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.254 [2024-07-15 19:19:32.637551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.638001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.638033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.638051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.638289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.638531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.638555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.638569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.642139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.254 [2024-07-15 19:19:32.651379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.651835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.651866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.651896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.652135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.652376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.652400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.652415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.655983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.254 [2024-07-15 19:19:32.665226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.254 [2024-07-15 19:19:32.665684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.254 [2024-07-15 19:19:32.665715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.254 [2024-07-15 19:19:32.665732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.254 [2024-07-15 19:19:32.665981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.254 [2024-07-15 19:19:32.666223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.254 [2024-07-15 19:19:32.666252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.254 [2024-07-15 19:19:32.666268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.254 [2024-07-15 19:19:32.669830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.255 [2024-07-15 19:19:32.679084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.255 [2024-07-15 19:19:32.679539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.255 [2024-07-15 19:19:32.679570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.255 [2024-07-15 19:19:32.679587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.255 [2024-07-15 19:19:32.679824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.255 [2024-07-15 19:19:32.680079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.255 [2024-07-15 19:19:32.680104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.255 [2024-07-15 19:19:32.680119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.515 [2024-07-15 19:19:32.683682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.515 [2024-07-15 19:19:32.692977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.515 [2024-07-15 19:19:32.693439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.515 [2024-07-15 19:19:32.693470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.515 [2024-07-15 19:19:32.693488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.515 [2024-07-15 19:19:32.693726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.515 [2024-07-15 19:19:32.693981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.515 [2024-07-15 19:19:32.694006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.515 [2024-07-15 19:19:32.694021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.515 [2024-07-15 19:19:32.697574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.515 [2024-07-15 19:19:32.706818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.515 [2024-07-15 19:19:32.707247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.515 [2024-07-15 19:19:32.707279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.515 [2024-07-15 19:19:32.707297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.515 [2024-07-15 19:19:32.707534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.515 [2024-07-15 19:19:32.707776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.515 [2024-07-15 19:19:32.707800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.515 [2024-07-15 19:19:32.707815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.515 [2024-07-15 19:19:32.711124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.515 [2024-07-15 19:19:32.720376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.515 [2024-07-15 19:19:32.720826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.515 [2024-07-15 19:19:32.720857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.515 [2024-07-15 19:19:32.720874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.515 [2024-07-15 19:19:32.721119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.515 [2024-07-15 19:19:32.721372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.515 [2024-07-15 19:19:32.721396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.515 [2024-07-15 19:19:32.721412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.515 [2024-07-15 19:19:32.725009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.515 [2024-07-15 19:19:32.734256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.515 [2024-07-15 19:19:32.734697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.515 [2024-07-15 19:19:32.734745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.515 [2024-07-15 19:19:32.734762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.515 [2024-07-15 19:19:32.735031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.515 [2024-07-15 19:19:32.735272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.515 [2024-07-15 19:19:32.735296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.515 [2024-07-15 19:19:32.735311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.515 [2024-07-15 19:19:32.738857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.515 [2024-07-15 19:19:32.748253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.515 [2024-07-15 19:19:32.748723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.515 [2024-07-15 19:19:32.748750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.515 [2024-07-15 19:19:32.748765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.515 [2024-07-15 19:19:32.749030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.515 [2024-07-15 19:19:32.749264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.515 [2024-07-15 19:19:32.749288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.515 [2024-07-15 19:19:32.749303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.515 [2024-07-15 19:19:32.752830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.515 [2024-07-15 19:19:32.762230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.515 [2024-07-15 19:19:32.762673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.762722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.762740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.763003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.763247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.763270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.763285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.766848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.516 [2024-07-15 19:19:32.776108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.776533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.776564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.776582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.776819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.777070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.777094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.777110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.780672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.516 [2024-07-15 19:19:32.790150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.790611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.790642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.790660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.790909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.791152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.791175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.791190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.794744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.516 [2024-07-15 19:19:32.803988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.804438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.804469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.804486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.804724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.804977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.805001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.805022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.808581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.516 [2024-07-15 19:19:32.817819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.818268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.818299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.818317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.818555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.818796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.818820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.818834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.822397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.516 [2024-07-15 19:19:32.831849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.832305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.832336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.832354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.832590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.832832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.832855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.832870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.836440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.516 [2024-07-15 19:19:32.845686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.846118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.846149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.846167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.846404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.846645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.846669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.846684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.850246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.516 [2024-07-15 19:19:32.859691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.860116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.860153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.860171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.860408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.860650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.860673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.860688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.864252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.516 [2024-07-15 19:19:32.873696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.874125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.874152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.874166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.874395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.874637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.874660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.874675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.878238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.516 [2024-07-15 19:19:32.887708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.888138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.888169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.888187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.888432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.888676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.888700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.888715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.892283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.516 [2024-07-15 19:19:32.901727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.902172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.902204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.516 [2024-07-15 19:19:32.902222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.516 [2024-07-15 19:19:32.902459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.516 [2024-07-15 19:19:32.902707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.516 [2024-07-15 19:19:32.902730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.516 [2024-07-15 19:19:32.902746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.516 [2024-07-15 19:19:32.906314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.516 [2024-07-15 19:19:32.915557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.516 [2024-07-15 19:19:32.916024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.516 [2024-07-15 19:19:32.916052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.517 [2024-07-15 19:19:32.916069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.517 [2024-07-15 19:19:32.916321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.517 [2024-07-15 19:19:32.916563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.517 [2024-07-15 19:19:32.916586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.517 [2024-07-15 19:19:32.916601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.517 [2024-07-15 19:19:32.920165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.517 [2024-07-15 19:19:32.929409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.517 [2024-07-15 19:19:32.929848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.517 [2024-07-15 19:19:32.929874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.517 [2024-07-15 19:19:32.929914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.517 [2024-07-15 19:19:32.930174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.517 [2024-07-15 19:19:32.930416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.517 [2024-07-15 19:19:32.930440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.517 [2024-07-15 19:19:32.930455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.517 [2024-07-15 19:19:32.934013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.517 [2024-07-15 19:19:32.943237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.517 [2024-07-15 19:19:32.943811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.517 [2024-07-15 19:19:32.943864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.517 [2024-07-15 19:19:32.943889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.517 [2024-07-15 19:19:32.944128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.517 [2024-07-15 19:19:32.944371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.517 [2024-07-15 19:19:32.944390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.517 [2024-07-15 19:19:32.944403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.776 [2024-07-15 19:19:32.947815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.776 [2024-07-15 19:19:32.957191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.776 [2024-07-15 19:19:32.957648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.776 [2024-07-15 19:19:32.957678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.776 [2024-07-15 19:19:32.957696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.776 [2024-07-15 19:19:32.957945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:32.958187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:32.958211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:32.958226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:32.961798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.777 [2024-07-15 19:19:32.971042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:32.971476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:32.971507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:32.971525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:32.971762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:32.972012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:32.972037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:32.972052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:32.975608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.777 [2024-07-15 19:19:32.984964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:32.985406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:32.985437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:32.985455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:32.985692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:32.985952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:32.985973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:32.985986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:32.989472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.777 [2024-07-15 19:19:32.998754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:32.999246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:32.999277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:32.999301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:32.999539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:32.999782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:32.999805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:32.999820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.003313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.777 [2024-07-15 19:19:33.012564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.013021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.013050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.013066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.013324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:33.013566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:33.013590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:33.013605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.017241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.777 [2024-07-15 19:19:33.026482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.026980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.027009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.027025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.027262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:33.027503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:33.027527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:33.027542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.031098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.777 [2024-07-15 19:19:33.040284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.040731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.040762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.040780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.041045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:33.041263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:33.041287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:33.041300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.044668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.777 [2024-07-15 19:19:33.054238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.054745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.054787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.054803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.055090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:33.055343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:33.055367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:33.055382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.058926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.777 [2024-07-15 19:19:33.068120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.068565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.068596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.068614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.068851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:33.069103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:33.069128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:33.069143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.072699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.777 [2024-07-15 19:19:33.082016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.082447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.082478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.082496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.082733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:33.082987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:33.083012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:33.083027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.086582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.777 [2024-07-15 19:19:33.095843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.096299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.096330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.096348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.096585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.777 [2024-07-15 19:19:33.096827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.777 [2024-07-15 19:19:33.096851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.777 [2024-07-15 19:19:33.096865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.777 [2024-07-15 19:19:33.100432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.777 [2024-07-15 19:19:33.109679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.777 [2024-07-15 19:19:33.110146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.777 [2024-07-15 19:19:33.110189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.777 [2024-07-15 19:19:33.110203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.777 [2024-07-15 19:19:33.110455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.778 [2024-07-15 19:19:33.110697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.778 [2024-07-15 19:19:33.110721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.778 [2024-07-15 19:19:33.110735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.778 [2024-07-15 19:19:33.114301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.778 [2024-07-15 19:19:33.123540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.778 [2024-07-15 19:19:33.123974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.778 [2024-07-15 19:19:33.124005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.778 [2024-07-15 19:19:33.124023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.778 [2024-07-15 19:19:33.124260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.778 [2024-07-15 19:19:33.124503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.778 [2024-07-15 19:19:33.124526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.778 [2024-07-15 19:19:33.124541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.778 [2024-07-15 19:19:33.128112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.778 [2024-07-15 19:19:33.137555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.778 [2024-07-15 19:19:33.138024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.778 [2024-07-15 19:19:33.138055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.778 [2024-07-15 19:19:33.138073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.778 [2024-07-15 19:19:33.138316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.778 [2024-07-15 19:19:33.138557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.778 [2024-07-15 19:19:33.138587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.778 [2024-07-15 19:19:33.138612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.778 [2024-07-15 19:19:33.142181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.778 [2024-07-15 19:19:33.151415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.778 [2024-07-15 19:19:33.151861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.778 [2024-07-15 19:19:33.151899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.778 [2024-07-15 19:19:33.151918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.778 [2024-07-15 19:19:33.152155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.778 [2024-07-15 19:19:33.152397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.778 [2024-07-15 19:19:33.152420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.778 [2024-07-15 19:19:33.152435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.778 [2024-07-15 19:19:33.156001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.778 [2024-07-15 19:19:33.165234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.778 [2024-07-15 19:19:33.165682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.778 [2024-07-15 19:19:33.165709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.778 [2024-07-15 19:19:33.165724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.778 [2024-07-15 19:19:33.165983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.778 [2024-07-15 19:19:33.166226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.778 [2024-07-15 19:19:33.166250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.778 [2024-07-15 19:19:33.166264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.778 [2024-07-15 19:19:33.169820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.778 [2024-07-15 19:19:33.179057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.778 [2024-07-15 19:19:33.179512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.778 [2024-07-15 19:19:33.179542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.778 [2024-07-15 19:19:33.179560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.778 [2024-07-15 19:19:33.179797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.778 [2024-07-15 19:19:33.180050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.778 [2024-07-15 19:19:33.180074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.778 [2024-07-15 19:19:33.180094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.778 [2024-07-15 19:19:33.183651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.778 [2024-07-15 19:19:33.192918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.778 [2024-07-15 19:19:33.193354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.778 [2024-07-15 19:19:33.193385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:52.778 [2024-07-15 19:19:33.193403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:52.778 [2024-07-15 19:19:33.193640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:52.778 [2024-07-15 19:19:33.193903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.778 [2024-07-15 19:19:33.193928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.778 [2024-07-15 19:19:33.193943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.778 [2024-07-15 19:19:33.197499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:52.778 [2024-07-15 19:19:33.206760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.051 [2024-07-15 19:19:33.207222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.051 [2024-07-15 19:19:33.207264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.051 [2024-07-15 19:19:33.207280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.051 [2024-07-15 19:19:33.207554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.051 [2024-07-15 19:19:33.207796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.051 [2024-07-15 19:19:33.207819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.051 [2024-07-15 19:19:33.207835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.051 [2024-07-15 19:19:33.211403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.051 [2024-07-15 19:19:33.220655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.051 [2024-07-15 19:19:33.221110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.051 [2024-07-15 19:19:33.221141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.051 [2024-07-15 19:19:33.221159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.051 [2024-07-15 19:19:33.221396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.051 [2024-07-15 19:19:33.221638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.051 [2024-07-15 19:19:33.221662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.051 [2024-07-15 19:19:33.221677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.051 [2024-07-15 19:19:33.225249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.051 [2024-07-15 19:19:33.234489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.051 [2024-07-15 19:19:33.234947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.051 [2024-07-15 19:19:33.234979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.051 [2024-07-15 19:19:33.234997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.051 [2024-07-15 19:19:33.235233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.051 [2024-07-15 19:19:33.235475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.051 [2024-07-15 19:19:33.235498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.051 [2024-07-15 19:19:33.235513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.051 [2024-07-15 19:19:33.239088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.051 [2024-07-15 19:19:33.248330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.051 [2024-07-15 19:19:33.248767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.052 [2024-07-15 19:19:33.248809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.052 [2024-07-15 19:19:33.248824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.052 [2024-07-15 19:19:33.249096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.052 [2024-07-15 19:19:33.249339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.052 [2024-07-15 19:19:33.249362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.052 [2024-07-15 19:19:33.249377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.052 [2024-07-15 19:19:33.252940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.052 [2024-07-15 19:19:33.262180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.052 [2024-07-15 19:19:33.262627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.052 [2024-07-15 19:19:33.262654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.052 [2024-07-15 19:19:33.262670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.052 [2024-07-15 19:19:33.262928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.052 [2024-07-15 19:19:33.263170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.052 [2024-07-15 19:19:33.263194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.052 [2024-07-15 19:19:33.263209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.052 [2024-07-15 19:19:33.266766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.052 [2024-07-15 19:19:33.276018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.052 [2024-07-15 19:19:33.276539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.052 [2024-07-15 19:19:33.276589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.052 [2024-07-15 19:19:33.276606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.052 [2024-07-15 19:19:33.276848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.052 [2024-07-15 19:19:33.277098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.052 [2024-07-15 19:19:33.277123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.052 [2024-07-15 19:19:33.277138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.052 [2024-07-15 19:19:33.280693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.052 [2024-07-15 19:19:33.289950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.052 [2024-07-15 19:19:33.290421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.052 [2024-07-15 19:19:33.290467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.052 [2024-07-15 19:19:33.290485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.052 [2024-07-15 19:19:33.290721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.052 [2024-07-15 19:19:33.290972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.052 [2024-07-15 19:19:33.290997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.052 [2024-07-15 19:19:33.291012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.052 [2024-07-15 19:19:33.294572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.052 [2024-07-15 19:19:33.303806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.052 [2024-07-15 19:19:33.304236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.052 [2024-07-15 19:19:33.304268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.052 [2024-07-15 19:19:33.304286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.052 [2024-07-15 19:19:33.304524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.052 [2024-07-15 19:19:33.304766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.052 [2024-07-15 19:19:33.304790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.052 [2024-07-15 19:19:33.304804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.052 [2024-07-15 19:19:33.308369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.052 [2024-07-15 19:19:33.317815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.052 [2024-07-15 19:19:33.318261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.052 [2024-07-15 19:19:33.318293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.052 [2024-07-15 19:19:33.318311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.052 [2024-07-15 19:19:33.318549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.052 [2024-07-15 19:19:33.318790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.052 [2024-07-15 19:19:33.318814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.052 [2024-07-15 19:19:33.318834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.322401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.053 [2024-07-15 19:19:33.331642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.332101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.332132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.332150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.332387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.332629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.332652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.332667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.336235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.053 [2024-07-15 19:19:33.345485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.345955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.345988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.346006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.346244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.346486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.346509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.346524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.350092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.053 [2024-07-15 19:19:33.359327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.359799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.359826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.359842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.360112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.360355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.360378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.360393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.363960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.053 [2024-07-15 19:19:33.373195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.373641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.373677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.373695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.373944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.374187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.374211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.374226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.377779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.053 [2024-07-15 19:19:33.387027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.387510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.387536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.387567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.387825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.388076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.388101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.388116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.391685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.053 [2024-07-15 19:19:33.400955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.401412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.401443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.401461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.401698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.401950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.401975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.401989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.405546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.053 [2024-07-15 19:19:33.414779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.415214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.415246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.415264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.415501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.415749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.415773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.415787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.419351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.053 [2024-07-15 19:19:33.428792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.429226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.429257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.429274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.429511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.429753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.429777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.429791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.433355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.053 [2024-07-15 19:19:33.442817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.443254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.443286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.443304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.443541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.443782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.443806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.443821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.447384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.053 [2024-07-15 19:19:33.456830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.457283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.457314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.457331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.457568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.457810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.457833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.457848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.053 [2024-07-15 19:19:33.461429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.053 [2024-07-15 19:19:33.470683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.053 [2024-07-15 19:19:33.471117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.053 [2024-07-15 19:19:33.471149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.053 [2024-07-15 19:19:33.471166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.053 [2024-07-15 19:19:33.471403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.053 [2024-07-15 19:19:33.471645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.053 [2024-07-15 19:19:33.471668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.053 [2024-07-15 19:19:33.471683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.316 [2024-07-15 19:19:33.475258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3408098 Killed "${NVMF_APP[@]}" "$@" 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.316 [2024-07-15 19:19:33.484713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.316 [2024-07-15 19:19:33.485179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.316 [2024-07-15 19:19:33.485211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.316 [2024-07-15 19:19:33.485228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3409052 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:53.316 [2024-07-15 19:19:33.485466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3409052 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3409052 ']' 00:24:53.316 [2024-07-15 19:19:33.485708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.316 [2024-07-15 19:19:33.485733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.316 [2024-07-15 19:19:33.485748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.316 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.316 [2024-07-15 19:19:33.489341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.316 [2024-07-15 19:19:33.498596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.316 [2024-07-15 19:19:33.499015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.316 [2024-07-15 19:19:33.499048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.316 [2024-07-15 19:19:33.499066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.316 [2024-07-15 19:19:33.499304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.316 [2024-07-15 19:19:33.499544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.316 [2024-07-15 19:19:33.499567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.316 [2024-07-15 19:19:33.499582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.316 [2024-07-15 19:19:33.503148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.316 [2024-07-15 19:19:33.512619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.316 [2024-07-15 19:19:33.513044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.316 [2024-07-15 19:19:33.513076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.316 [2024-07-15 19:19:33.513093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.316 [2024-07-15 19:19:33.513330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.316 [2024-07-15 19:19:33.513571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.316 [2024-07-15 19:19:33.513595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.316 [2024-07-15 19:19:33.513610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.316 [2024-07-15 19:19:33.517179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.316 [2024-07-15 19:19:33.526459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.316 [2024-07-15 19:19:33.526887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.316 [2024-07-15 19:19:33.526918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.316 [2024-07-15 19:19:33.526936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.316 [2024-07-15 19:19:33.527174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.316 [2024-07-15 19:19:33.527416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.316 [2024-07-15 19:19:33.527439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.316 [2024-07-15 19:19:33.527453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.316 [2024-07-15 19:19:33.528902] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:24:53.316 [2024-07-15 19:19:33.528976] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.316 [2024-07-15 19:19:33.531018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.316 [2024-07-15 19:19:33.540484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.316 [2024-07-15 19:19:33.540931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.316 [2024-07-15 19:19:33.540964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.316 [2024-07-15 19:19:33.540982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.316 [2024-07-15 19:19:33.541220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.316 [2024-07-15 19:19:33.541462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.316 [2024-07-15 19:19:33.541486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.316 [2024-07-15 19:19:33.541501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.316 [2024-07-15 19:19:33.545064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.316 [2024-07-15 19:19:33.554511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.554975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.555007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.555025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.317 [2024-07-15 19:19:33.555262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.317 [2024-07-15 19:19:33.555503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.317 [2024-07-15 19:19:33.555527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.317 [2024-07-15 19:19:33.555542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.317 [2024-07-15 19:19:33.559310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.317 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.317 [2024-07-15 19:19:33.568361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.568820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.568852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.568870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.317 [2024-07-15 19:19:33.569117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.317 [2024-07-15 19:19:33.569359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.317 [2024-07-15 19:19:33.569383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.317 [2024-07-15 19:19:33.569398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.317 [2024-07-15 19:19:33.572958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.317 [2024-07-15 19:19:33.581871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.582313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.582339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.582370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.317 [2024-07-15 19:19:33.582615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.317 [2024-07-15 19:19:33.582819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.317 [2024-07-15 19:19:33.582839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.317 [2024-07-15 19:19:33.582852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.317 [2024-07-15 19:19:33.585968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.317 [2024-07-15 19:19:33.595172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.595618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.595646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.595662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.317 [2024-07-15 19:19:33.595898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.317 [2024-07-15 19:19:33.596080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:53.317 [2024-07-15 19:19:33.596111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.317 [2024-07-15 19:19:33.596130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.317 [2024-07-15 19:19:33.596143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.317 [2024-07-15 19:19:33.599247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.317 [2024-07-15 19:19:33.608455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.609124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.609164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.609183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.317 [2024-07-15 19:19:33.609441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.317 [2024-07-15 19:19:33.609650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.317 [2024-07-15 19:19:33.609671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.317 [2024-07-15 19:19:33.609686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.317 [2024-07-15 19:19:33.612760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.317 [2024-07-15 19:19:33.622007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.622499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.622528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.622545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.317 [2024-07-15 19:19:33.622786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.317 [2024-07-15 19:19:33.623021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.317 [2024-07-15 19:19:33.623050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.317 [2024-07-15 19:19:33.623065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.317 [2024-07-15 19:19:33.626139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.317 [2024-07-15 19:19:33.635486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.635957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.635985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.636002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.317 [2024-07-15 19:19:33.636215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.317 [2024-07-15 19:19:33.636453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.317 [2024-07-15 19:19:33.636473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.317 [2024-07-15 19:19:33.636486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.317 [2024-07-15 19:19:33.639645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.317 [2024-07-15 19:19:33.648817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.317 [2024-07-15 19:19:33.649279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.317 [2024-07-15 19:19:33.649308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.317 [2024-07-15 19:19:33.649325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.649570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.649774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.649795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.649808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.318 [2024-07-15 19:19:33.652875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.318 [2024-07-15 19:19:33.662103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.318 [2024-07-15 19:19:33.662762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.318 [2024-07-15 19:19:33.662811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.318 [2024-07-15 19:19:33.662830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.663060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.663309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.663330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.663346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.318 [2024-07-15 19:19:33.666412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.318 [2024-07-15 19:19:33.675432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.318 [2024-07-15 19:19:33.675974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.318 [2024-07-15 19:19:33.676003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.318 [2024-07-15 19:19:33.676020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.676248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.676472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.676492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.676505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.318 [2024-07-15 19:19:33.679538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.318 [2024-07-15 19:19:33.688747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.318 [2024-07-15 19:19:33.689193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.318 [2024-07-15 19:19:33.689223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.318 [2024-07-15 19:19:33.689239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.689492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.689698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.689720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.689733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.318 [2024-07-15 19:19:33.692791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.318 [2024-07-15 19:19:33.702172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.318 [2024-07-15 19:19:33.702698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.318 [2024-07-15 19:19:33.702728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.318 [2024-07-15 19:19:33.702745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.702979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.703204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.703225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.703238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.318 [2024-07-15 19:19:33.706286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.318 [2024-07-15 19:19:33.707471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.318 [2024-07-15 19:19:33.707503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.318 [2024-07-15 19:19:33.707516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.318 [2024-07-15 19:19:33.707542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.318 [2024-07-15 19:19:33.707558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:53.318 [2024-07-15 19:19:33.707642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.318 [2024-07-15 19:19:33.707708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.318 [2024-07-15 19:19:33.707712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.318 [2024-07-15 19:19:33.715677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.318 [2024-07-15 19:19:33.716271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.318 [2024-07-15 19:19:33.716308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.318 [2024-07-15 19:19:33.716326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.716547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.716768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.716790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.716805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.318 [2024-07-15 19:19:33.720054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.318 [2024-07-15 19:19:33.729279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.318 [2024-07-15 19:19:33.729911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.318 [2024-07-15 19:19:33.729951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.318 [2024-07-15 19:19:33.729971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.730192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.730414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.730436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.730455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.318 [2024-07-15 19:19:33.733732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.318 [2024-07-15 19:19:33.742974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.318 [2024-07-15 19:19:33.743509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.318 [2024-07-15 19:19:33.743550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.318 [2024-07-15 19:19:33.743569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.318 [2024-07-15 19:19:33.743791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.318 [2024-07-15 19:19:33.744031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.318 [2024-07-15 19:19:33.744054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.318 [2024-07-15 19:19:33.744070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.579 [2024-07-15 19:19:33.747419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.579 [2024-07-15 19:19:33.756559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.579 [2024-07-15 19:19:33.757145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.579 [2024-07-15 19:19:33.757184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.579 [2024-07-15 19:19:33.757212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.579 [2024-07-15 19:19:33.757456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.579 [2024-07-15 19:19:33.757670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.579 [2024-07-15 19:19:33.757691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.579 [2024-07-15 19:19:33.757707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.579 [2024-07-15 19:19:33.760883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.579 [2024-07-15 19:19:33.769992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.579 [2024-07-15 19:19:33.770538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.579 [2024-07-15 19:19:33.770573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.579 [2024-07-15 19:19:33.770592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.579 [2024-07-15 19:19:33.770813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.579 [2024-07-15 19:19:33.771041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.579 [2024-07-15 19:19:33.771063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.579 [2024-07-15 19:19:33.771079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.579 [2024-07-15 19:19:33.774279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.579 [2024-07-15 19:19:33.783607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.579 [2024-07-15 19:19:33.784196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.579 [2024-07-15 19:19:33.784239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.579 [2024-07-15 19:19:33.784258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.579 [2024-07-15 19:19:33.784497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.579 [2024-07-15 19:19:33.784711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.579 [2024-07-15 19:19:33.784732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.579 [2024-07-15 19:19:33.784748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.579 [2024-07-15 19:19:33.788027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.579 [2024-07-15 19:19:33.797101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.579 [2024-07-15 19:19:33.797590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.579 [2024-07-15 19:19:33.797619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.579 [2024-07-15 19:19:33.797636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.579 [2024-07-15 19:19:33.797895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.579 [2024-07-15 19:19:33.798129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.579 [2024-07-15 19:19:33.798151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.579 [2024-07-15 19:19:33.798165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.579 [2024-07-15 19:19:33.801329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.579 [2024-07-15 19:19:33.810591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.579 [2024-07-15 19:19:33.811030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.579 [2024-07-15 19:19:33.811059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.579 [2024-07-15 19:19:33.811075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.579 [2024-07-15 19:19:33.811289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.579 [2024-07-15 19:19:33.811507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.579 [2024-07-15 19:19:33.811528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.579 [2024-07-15 19:19:33.811541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.579 [2024-07-15 19:19:33.814758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.579 [2024-07-15 19:19:33.824124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.579 [2024-07-15 19:19:33.824514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.579 [2024-07-15 19:19:33.824543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.579 [2024-07-15 19:19:33.824559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.580 [2024-07-15 19:19:33.824773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.580 [2024-07-15 19:19:33.825008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.580 [2024-07-15 19:19:33.825030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.580 [2024-07-15 19:19:33.825045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 [2024-07-15 19:19:33.828327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.580 [2024-07-15 19:19:33.837765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.580 [2024-07-15 19:19:33.838221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.580 [2024-07-15 19:19:33.838250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.580 [2024-07-15 19:19:33.838271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.580 [2024-07-15 19:19:33.838506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.580 [2024-07-15 19:19:33.838718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.580 [2024-07-15 19:19:33.838740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.580 [2024-07-15 19:19:33.838754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.580 [2024-07-15 19:19:33.842069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 [2024-07-15 19:19:33.851177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.580 [2024-07-15 19:19:33.851618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.580 [2024-07-15 19:19:33.851647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.580 [2024-07-15 19:19:33.851663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.580 [2024-07-15 19:19:33.851915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.580 [2024-07-15 19:19:33.852139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.580 [2024-07-15 19:19:33.852160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.580 [2024-07-15 19:19:33.852173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.580 [2024-07-15 19:19:33.854410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.580 [2024-07-15 19:19:33.855380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 [2024-07-15 19:19:33.864845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.580 [2024-07-15 19:19:33.865279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.580 [2024-07-15 19:19:33.865307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.580 [2024-07-15 19:19:33.865323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.580 [2024-07-15 19:19:33.865564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.580 [2024-07-15 19:19:33.865768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.580 [2024-07-15 19:19:33.865788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.580 [2024-07-15 19:19:33.865801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:53.580 [2024-07-15 19:19:33.869007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.580 [2024-07-15 19:19:33.878336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.580 [2024-07-15 19:19:33.878796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.580 [2024-07-15 19:19:33.878825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.580 [2024-07-15 19:19:33.878843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.580 [2024-07-15 19:19:33.879067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.580 [2024-07-15 19:19:33.879303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.580 [2024-07-15 19:19:33.879324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.580 [2024-07-15 19:19:33.879338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.580 [2024-07-15 19:19:33.882491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.580 [2024-07-15 19:19:33.891787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.580 [2024-07-15 19:19:33.892625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.580 [2024-07-15 19:19:33.892694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.580 [2024-07-15 19:19:33.892730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.580 [2024-07-15 19:19:33.892973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.580 [2024-07-15 19:19:33.893197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.580 [2024-07-15 19:19:33.893220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.580 [2024-07-15 19:19:33.893251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.580 Malloc0 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 [2024-07-15 19:19:33.896512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 [2024-07-15 19:19:33.905421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.580 [2024-07-15 19:19:33.905858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.580 [2024-07-15 19:19:33.905895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb4ac0 with addr=10.0.0.2, port=4420 00:24:53.580 [2024-07-15 19:19:33.905914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb4ac0 is same with the state(5) to be set 00:24:53.580 [2024-07-15 19:19:33.906129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb4ac0 (9): Bad file descriptor 00:24:53.580 [2024-07-15 19:19:33.906357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.580 [2024-07-15 19:19:33.906378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.580 [2024-07-15 19:19:33.906401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.580 [2024-07-15 19:19:33.909660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.580 [2024-07-15 19:19:33.914495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.580 19:19:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3408383 00:24:53.580 [2024-07-15 19:19:33.919002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.841 [2024-07-15 19:19:34.082662] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:03.821 00:25:03.821 Latency(us) 00:25:03.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.821 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:03.821 Verification LBA range: start 0x0 length 0x4000 00:25:03.821 Nvme1n1 : 15.01 6690.93 26.14 9330.68 0.00 7965.43 819.20 22524.97 00:25:03.821 =================================================================================================================== 00:25:03.821 Total : 6690.93 26.14 9330.68 0.00 7965.43 819.20 22524.97 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.821 rmmod nvme_tcp 00:25:03.821 rmmod nvme_fabrics 00:25:03.821 rmmod nvme_keyring 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3409052 ']' 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3409052 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3409052 ']' 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3409052 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3409052 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3409052' 00:25:03.821 killing process with pid 3409052 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3409052 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3409052 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.821 19:19:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.213 19:19:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.213 00:25:05.213 real 0m22.334s 00:25:05.213 user 0m59.525s 00:25:05.213 sys 0m4.278s 00:25:05.213 19:19:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.213 19:19:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:05.213 ************************************ 00:25:05.213 END TEST nvmf_bdevperf 00:25:05.213 ************************************ 00:25:05.213 19:19:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:05.213 19:19:45 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:05.213 19:19:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:05.213 19:19:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.213 19:19:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:05.213 ************************************ 00:25:05.213 START TEST nvmf_target_disconnect 00:25:05.213 ************************************ 00:25:05.213 19:19:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:05.471 * Looking for test storage... 
00:25:05.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.471 19:19:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.472 19:19:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.399 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:07.400 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:07.400 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.400 19:19:47 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:07.400 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:07.400 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:07.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:25:07.400 00:25:07.400 --- 10.0.0.2 ping statistics --- 00:25:07.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.400 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:25:07.400 00:25:07.400 --- 10.0.0.1 ping statistics --- 00:25:07.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.400 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:07.400 ************************************ 00:25:07.400 START TEST nvmf_target_disconnect_tc1 00:25:07.400 ************************************ 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:07.400 
19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:07.400 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.400 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.660 [2024-07-15 19:19:47.838570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.660 [2024-07-15 19:19:47.838641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c281a0 with addr=10.0.0.2, port=4420 00:25:07.660 [2024-07-15 19:19:47.838680] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:07.660 [2024-07-15 19:19:47.838701] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:07.660 [2024-07-15 19:19:47.838715] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:07.660 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:07.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:07.660 Initializing NVMe Controllers 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.660 00:25:07.660 real 0m0.094s 00:25:07.660 user 0m0.047s 00:25:07.660 sys 
0m0.047s 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:07.660 ************************************ 00:25:07.660 END TEST nvmf_target_disconnect_tc1 00:25:07.660 ************************************ 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:07.660 ************************************ 00:25:07.660 START TEST nvmf_target_disconnect_tc2 00:25:07.660 ************************************ 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3412200 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3412200 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3412200 ']' 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
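At this point the tc2 case starts the target application inside the cvl_0_0_ns_spdk namespace (so it owns cvl_0_0 / 10.0.0.2) and then blocks until the RPC socket answers. A rough equivalent of that launch-and-wait step, written as a standalone sketch rather than the harness's own waitforlisten helper; the socket path and app arguments mirror the log, the polling loop is an assumption:

# Sketch: start nvmf_tgt in the target namespace and poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
for _ in $(seq 1 100); do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  # rpc_get_methods is a core SPDK RPC, so it responds before any config is loaded.
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done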
00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.660 19:19:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.660 [2024-07-15 19:19:47.945174] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:07.660 [2024-07-15 19:19:47.945284] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.660 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.660 [2024-07-15 19:19:48.009738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.918 [2024-07-15 19:19:48.124688] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.918 [2024-07-15 19:19:48.124745] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.918 [2024-07-15 19:19:48.124774] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.918 [2024-07-15 19:19:48.124785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.918 [2024-07-15 19:19:48.124794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.918 [2024-07-15 19:19:48.124949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:07.918 [2024-07-15 19:19:48.125011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:07.918 [2024-07-15 19:19:48.125077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:07.918 [2024-07-15 19:19:48.125080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.918 Malloc0 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:07.918 19:19:48 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.918 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.919 [2024-07-15 19:19:48.306563] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.919 [2024-07-15 19:19:48.334797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3412229 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:07.919 19:19:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.177 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:10.121 19:19:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3412200 00:25:10.121 19:19:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 [2024-07-15 19:19:50.358904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on 
qpair id 4 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 [2024-07-15 19:19:50.359229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 
00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Read completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.121 starting I/O failed 00:25:10.121 Write completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Write completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Write completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Write completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Read completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 Write completed with error (sct=0, sc=8) 00:25:10.122 starting I/O failed 00:25:10.122 [2024-07-15 19:19:50.359553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.122 [2024-07-15 19:19:50.359798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.359830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.360003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.360031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.360185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.360212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.360365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.360391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 
00:25:10.122 [2024-07-15 19:19:50.360559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.360602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.360866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.360936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.361083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.361109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.361286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.361312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.361478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.361504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.361731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.361757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.361929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.361957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.362106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.362132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.362304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.362330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.362505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.362531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 
00:25:10.122 [2024-07-15 19:19:50.362810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.362860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.363007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.363034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.363193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.363226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.363395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.363422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.363597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.363623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.363836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.363893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.364052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.364079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.364225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.364251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.364401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.364446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.364634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.364664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 
00:25:10.122 [2024-07-15 19:19:50.364888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.364933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.365074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.365101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.365277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.365304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.365492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.365518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.365753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.365783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.365974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.366003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.366162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.366191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.366359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.366386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.366524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.366551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.366747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.366790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 
00:25:10.122 [2024-07-15 19:19:50.366988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.367016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.122 qpair failed and we were unable to recover it. 00:25:10.122 [2024-07-15 19:19:50.367157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.122 [2024-07-15 19:19:50.367187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.367360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.367401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.367593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.367622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.367869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.367905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.368063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.368089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.368264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.368290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.368499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.368528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.368712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.368742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.368916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.368945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 
00:25:10.123 [2024-07-15 19:19:50.369092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.369120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.369340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.369365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.369551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.369578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.369747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.369773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.369941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.369968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.370114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.370141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.370319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.370345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.370490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.370533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.370727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.370756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.370969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.370997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 
00:25:10.123 [2024-07-15 19:19:50.371195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.371221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.371438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.371464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.371668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.371702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.371861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.371901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.372091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.372118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.372318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.372345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.372642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.372668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.372880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.372907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.373071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.373097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.373264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.373291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 
00:25:10.123 [2024-07-15 19:19:50.373435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.373464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.373641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.373668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.373836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.373874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.374047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.374073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.374251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.374277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.374472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.374499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.374694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.374724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.374897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.374925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.375093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.375120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.375357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.375384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 
00:25:10.123 [2024-07-15 19:19:50.375551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.375577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.375785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.123 [2024-07-15 19:19:50.375811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.123 qpair failed and we were unable to recover it. 00:25:10.123 [2024-07-15 19:19:50.375996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.376023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.376219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.376246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.376460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.376490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.376741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.376771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.376995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.377023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.377237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.377266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.377444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.377471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.377679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.377706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 
00:25:10.124 [2024-07-15 19:19:50.377850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.377888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.378055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.378081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.378292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.378335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.378517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.378547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.378715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.378741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.378910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.378937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.379132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.379159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.379327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.379354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.379549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.379575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.379760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.379787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 
00:25:10.124 [2024-07-15 19:19:50.379996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.380024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.380199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.380226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.380419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.380449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.380636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.380665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.380853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.380889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.381063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.381090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.381246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.381273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.381486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.381516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.381706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.381733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.381906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.381934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 
00:25:10.124 [2024-07-15 19:19:50.382145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.382184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.382367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.382398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.382590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.382616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 [2024-07-15 19:19:50.382763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.124 [2024-07-15 19:19:50.382791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:10.124 qpair failed and we were unable to recover it. 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 
00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Write completed with error (sct=0, sc=8) 00:25:10.124 starting I/O failed 00:25:10.124 Read completed with error (sct=0, sc=8) 00:25:10.125 starting I/O failed 00:25:10.125 Read completed with error (sct=0, sc=8) 00:25:10.125 starting I/O failed 00:25:10.125 Write completed with error (sct=0, sc=8) 00:25:10.125 starting I/O failed 00:25:10.125 Write completed with error (sct=0, sc=8) 00:25:10.125 starting I/O failed 00:25:10.125 Read completed with error (sct=0, sc=8) 00:25:10.125 starting I/O failed 00:25:10.125 [2024-07-15 19:19:50.383143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:10.125 [2024-07-15 19:19:50.383383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.383422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.383608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.383636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.383785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.383813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.384019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.384046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.384216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.384242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.384437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.384464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.384607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.384633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.384804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.384830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 
00:25:10.125 [2024-07-15 19:19:50.385024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.385057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.385215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.385258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.385420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.385465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.385652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.385695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.385864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.385898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.386122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.386152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.386371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.386397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.386567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.386595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.386787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.386814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.386982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.387010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 
00:25:10.125 [2024-07-15 19:19:50.387204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.387231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.387398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.387424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.387587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.387613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.387785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.387811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.388022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.388048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.388244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.388270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.388419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.388445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.388624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.388652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.388837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.388863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.389054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.389101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 
00:25:10.125 [2024-07-15 19:19:50.389266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.125 [2024-07-15 19:19:50.389292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.125 qpair failed and we were unable to recover it. 00:25:10.125 [2024-07-15 19:19:50.389497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.389523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.389691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.389716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.389867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.389914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.390090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.390116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.390285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.390313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.390591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.390642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.390845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.390872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.391057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.391082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.391268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.391296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 
00:25:10.126 [2024-07-15 19:19:50.391483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.391530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.391729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.391754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.391950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.391976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.392121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.392147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.392339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.392364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.392523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.392548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.392723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.392751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.392964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.393005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.393196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.393221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.393362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.393403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 
00:25:10.126 [2024-07-15 19:19:50.393631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.393657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.393799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.393825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.394003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.394029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.394209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.394237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.394435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.394461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.394629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.394654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.394821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.394846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.395015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.395042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.395214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.395240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.395409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.395435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 
00:25:10.126 [2024-07-15 19:19:50.395598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.395623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.395788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.395814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.396003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.396043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.396245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.396272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.396438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.396470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.396611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.396636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.396794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.396835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.397037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.397064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.397261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.397291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.397502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.397531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 
00:25:10.126 [2024-07-15 19:19:50.397710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.397738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.397965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.126 [2024-07-15 19:19:50.397993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.126 qpair failed and we were unable to recover it. 00:25:10.126 [2024-07-15 19:19:50.398138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.398163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.398364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.398411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.398762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.398812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.399015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.399041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.399194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.399219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.399405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.399433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.399624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.399666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.399833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.399858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 
00:25:10.127 [2024-07-15 19:19:50.400072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.400097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.400268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.400293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.400501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.400529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.400817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.400872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.401096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.401122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.401292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.401322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.401681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.401729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.401943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.401969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.402137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.402163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.402305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.402331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 
00:25:10.127 [2024-07-15 19:19:50.402532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.402560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.402750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.402780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.402952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.402977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.403151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.403176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.403392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.403421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.403604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.403630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.403796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.403821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.404010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.404035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.404256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.404285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.404455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.404480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 
00:25:10.127 [2024-07-15 19:19:50.404646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.404672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.404866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.404901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.405055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.405081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.405277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.405302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.405450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.405476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.405646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.405672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.405902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.405945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.406124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.406149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.406322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.406347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.406483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.406509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 
00:25:10.127 [2024-07-15 19:19:50.406687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.406716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.406885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.406910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.407131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.407159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.127 [2024-07-15 19:19:50.407347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.127 [2024-07-15 19:19:50.407374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.127 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.407588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.407613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.407773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.407803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.408022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.408052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.408239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.408265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.408476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.408503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.408726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.408754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 
00:25:10.128 [2024-07-15 19:19:50.408953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.408978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.409155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.409180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.409373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.409401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.409594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.409620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.409812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.409838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.410016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.410042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.410184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.410209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.410421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.410449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.410658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.410686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.410883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.410921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 
00:25:10.128 [2024-07-15 19:19:50.411110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.411135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.411331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.411356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.411506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.411532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.411708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.411734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.411900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.411927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.412092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.412118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.412308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.412335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.412516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.412545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.412726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.412751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.412942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.412971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 
00:25:10.128 [2024-07-15 19:19:50.413182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.413209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.413422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.413447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.413637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.413665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.413861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.413892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.414076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.414101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.414328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.414356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.414549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.414575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.414770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.414798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.414998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.415023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.415214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.415239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 
00:25:10.128 [2024-07-15 19:19:50.415475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.415501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.415694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.415723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.415934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.415963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.416160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.416187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.416377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.128 [2024-07-15 19:19:50.416406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.128 qpair failed and we were unable to recover it. 00:25:10.128 [2024-07-15 19:19:50.416598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.416627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.416850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.416891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.417060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.417087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.417258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.417283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.417431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.417461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 
00:25:10.129 [2024-07-15 19:19:50.417628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.417654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.417825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.417850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.418058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.418083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.418269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.418297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.418483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.418512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.418675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.418701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.418843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.418868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.419021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.419047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.419248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.419274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.419470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.419499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 
00:25:10.129 [2024-07-15 19:19:50.419705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.419733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.419946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.419972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.420138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.420166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.420334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.420364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.420555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.420581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.420773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.420799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.420964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.420990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.421157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.421183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.421351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.421380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.421575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.421601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 
00:25:10.129 [2024-07-15 19:19:50.421764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.421790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.421952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.421978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.422111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.422135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.422298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.422324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.422471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.422495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.422662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.422687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.422895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.422938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.423078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.423104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.423323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.423352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.423514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.423539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 
00:25:10.129 [2024-07-15 19:19:50.423672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.423696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.423898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.423927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.424106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.424132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.129 qpair failed and we were unable to recover it. 00:25:10.129 [2024-07-15 19:19:50.424346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.129 [2024-07-15 19:19:50.424375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.424557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.424587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.424746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.424771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.424959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.424988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.425205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.425231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.425400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.425426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.425603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.425631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 
00:25:10.130 [2024-07-15 19:19:50.425851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.425885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.426080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.426105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.426243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.426268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.426413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.426454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.426625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.426650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.426845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.426871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.427069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.427095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.427229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.427255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.427469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.427495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.427681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.427708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 
00:25:10.130 [2024-07-15 19:19:50.427902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.427928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.428143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.428172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.428391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.428420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.428591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.428617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.428792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.428818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.428984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.429011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.429180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.429205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.429382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.429408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.429600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.429628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.429825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.429851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 
00:25:10.130 [2024-07-15 19:19:50.430000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.430026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.430220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.430247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.430440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.430465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.430658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.430686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.430887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.430914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.431080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.431106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.431259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.431287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.431472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.431504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.431694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.431719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.431890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.431918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 
00:25:10.130 [2024-07-15 19:19:50.432103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.130 [2024-07-15 19:19:50.432131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.130 qpair failed and we were unable to recover it. 00:25:10.130 [2024-07-15 19:19:50.432349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.432374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.432537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.432565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.432783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.432808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.432978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.433005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.433172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.433198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.433368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.433395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.433609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.433634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.433799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.433828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.434031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.434058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 
00:25:10.131 [2024-07-15 19:19:50.434214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.434241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.434447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.434473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.434687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.434715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.434941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.434967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.435139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.435164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.435350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.435386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.435592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.435618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.435787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.435813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.435962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.435989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.436159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.436185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 
00:25:10.131 [2024-07-15 19:19:50.436324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.436349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.436538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.436564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.436768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.436796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.436981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.437006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.437148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.437190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.437389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.437415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.437580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.437606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.437792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.437820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.437987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.438013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.438156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.438181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 
00:25:10.131 [2024-07-15 19:19:50.438388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.438416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.438599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.438625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.438812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.438840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.439037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.439063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.439207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.439232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.439396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.439422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.439593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.439635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.439821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.439846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.440066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.440098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.440292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.440320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 
00:25:10.131 [2024-07-15 19:19:50.440529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.440555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.440780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.440809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.131 [2024-07-15 19:19:50.441011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.131 [2024-07-15 19:19:50.441037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.131 qpair failed and we were unable to recover it. 00:25:10.132 [2024-07-15 19:19:50.441200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.132 [2024-07-15 19:19:50.441226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.132 qpair failed and we were unable to recover it. 00:25:10.132 [2024-07-15 19:19:50.441387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.132 [2024-07-15 19:19:50.441416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.132 qpair failed and we were unable to recover it. 00:25:10.132 [2024-07-15 19:19:50.441622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.132 [2024-07-15 19:19:50.441647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.132 qpair failed and we were unable to recover it. 00:25:10.132 [2024-07-15 19:19:50.441843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.132 [2024-07-15 19:19:50.441868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.132 qpair failed and we were unable to recover it. 00:25:10.132 [2024-07-15 19:19:50.442024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.132 [2024-07-15 19:19:50.442050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.132 qpair failed and we were unable to recover it. 00:25:10.132 [2024-07-15 19:19:50.442222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.132 [2024-07-15 19:19:50.442265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.132 qpair failed and we were unable to recover it. 00:25:10.132 [2024-07-15 19:19:50.442482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.132 [2024-07-15 19:19:50.442508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.132 qpair failed and we were unable to recover it. 
00:25:10.132 [2024-07-15 19:19:50.442688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.132 [2024-07-15 19:19:50.442716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:10.132 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair and "qpair failed and we were unable to recover it." message repeat back-to-back for tqpair=0xc12200 (addr=10.0.0.2, port=4420), with timestamps advancing from 19:19:50.442903 through 19:19:50.478854 ...]
00:25:10.136 [2024-07-15 19:19:50.479077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.136 [2024-07-15 19:19:50.479118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:10.136 qpair failed and we were unable to recover it.
[... the same error pair repeats for tqpair=0x7f96ac000b90 (addr=10.0.0.2, port=4420) through 19:19:50.480107 ...]
00:25:10.136 [2024-07-15 19:19:50.480328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.136 [2024-07-15 19:19:50.480357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:10.136 qpair failed and we were unable to recover it.
[... the same error pair continues to repeat for tqpair=0xc12200 (addr=10.0.0.2, port=4420), with timestamps advancing through 19:19:50.486134 ...]
00:25:10.137 [2024-07-15 19:19:50.486328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.486357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.486512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.486540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.486738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.486764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.486935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.486961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.487129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.487155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.487349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.487375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.487544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.487569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.487727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.487755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.487911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.487937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.488080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.488109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 
00:25:10.137 [2024-07-15 19:19:50.488258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.488300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.488467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.488492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.488642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.488667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.488858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.488889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.489057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.489083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.489299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.489328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.489540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.489591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.489818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.489844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.490014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.490040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.490196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.490224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 
00:25:10.137 [2024-07-15 19:19:50.490437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.490463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.490640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.490666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.490860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.490893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.491041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.137 [2024-07-15 19:19:50.491067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.137 qpair failed and we were unable to recover it. 00:25:10.137 [2024-07-15 19:19:50.491268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.491296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.491612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.491673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.491860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.491896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.492041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.492067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.492306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.492332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.492475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.492500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 
00:25:10.138 [2024-07-15 19:19:50.492667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.492703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.492897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.492924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.493089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.493115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.493314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.493339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.493549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.493578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.493769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.493794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.493942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.493968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.494175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.494212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.494372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.494398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.494544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.494570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 
00:25:10.138 [2024-07-15 19:19:50.494782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.494811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.494973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.494999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.495132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.495175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.495382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.495411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.495596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.495622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.495810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.495837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.496065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.496091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.496281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.496307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.496501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.496529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.496705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.496742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 
00:25:10.138 [2024-07-15 19:19:50.496959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.496997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.497180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.497208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.497372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.497398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.497598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.497624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.497812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.497840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.498038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.498066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.498218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.498244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.498390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.138 [2024-07-15 19:19:50.498417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.138 qpair failed and we were unable to recover it. 00:25:10.138 [2024-07-15 19:19:50.498627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.498656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.498819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.498844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 
00:25:10.139 [2024-07-15 19:19:50.499027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.499056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.499271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.499300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.499512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.499538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.499738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.499766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.499982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.500009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.500182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.500207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.500377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.500403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.500548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.500574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.500739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.500765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.500955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.500981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 
00:25:10.139 [2024-07-15 19:19:50.501125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.501151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.501291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.501317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.501488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.501514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.501697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.501723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.501874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.501907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.502078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.502104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.502250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.502276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.502442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.502472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.502664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.502692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.502852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.502887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 
00:25:10.139 [2024-07-15 19:19:50.503075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.503101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.503277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.503305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.503494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.503523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.503714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.503740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.503914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.503941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.504151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.504181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.504377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.504403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.504568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.504594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.504770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.504796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.504944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.504970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 
00:25:10.139 [2024-07-15 19:19:50.505109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.505135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.505305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.505331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.505540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.505566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.505782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.505810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.505998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.506027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.506216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.506241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.506413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.506439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.506602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.506631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.506819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.506845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 00:25:10.139 [2024-07-15 19:19:50.507017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-07-15 19:19:50.507044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.139 qpair failed and we were unable to recover it. 
00:25:10.140 [2024-07-15 19:19:50.507276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.507305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.507488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.507514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.507706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.507735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.507951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.507977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.508147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.508173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.508329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.508358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.508548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.508574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.508715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.508742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.508943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.508973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.509193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.509218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 
00:25:10.140 [2024-07-15 19:19:50.509388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.509414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.509603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.509632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.509825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.509853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.510048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.510075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.510248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.510278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.510464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.510490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.510652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.510684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.510842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.510869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.511029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.511062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.511264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.511291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 
00:25:10.140 [2024-07-15 19:19:50.511452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.511479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.511664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.511692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.511884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.511910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.512107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.512133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.512334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.512360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.512528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.512553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.512745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.512774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.512945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.512971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.513119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.513146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 00:25:10.140 [2024-07-15 19:19:50.513324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.140 [2024-07-15 19:19:50.513350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.140 qpair failed and we were unable to recover it. 
00:25:10.140 [2024-07-15 19:19:50.513487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.140 [2024-07-15 19:19:50.513528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:10.140 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-07-15 19:19:50.513748 through 19:19:50.557299 ...]
00:25:10.433 [2024-07-15 19:19:50.557480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.433 [2024-07-15 19:19:50.557508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:10.433 qpair failed and we were unable to recover it.
00:25:10.433 [2024-07-15 19:19:50.557689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.557717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.557875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.557915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.558080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.558106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.558270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.558295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.558458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.558484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.558650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.558678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.558860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.558895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.559104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.559129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.559297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.559322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.559495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.559520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 
00:25:10.433 [2024-07-15 19:19:50.559694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.559719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.559904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.559933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.560089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.560116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.560308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.560334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.560500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.560525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.560693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.560718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.560885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.560911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.561099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.561124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.561319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.561350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.561544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.561570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 
00:25:10.433 [2024-07-15 19:19:50.561781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.561807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.561951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.561976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.562119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.562144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.562358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.562391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.562584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.562610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.562798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.562824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.563015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.563044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.433 qpair failed and we were unable to recover it. 00:25:10.433 [2024-07-15 19:19:50.563260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.433 [2024-07-15 19:19:50.563285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.563464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.563488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.563673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.563701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 
00:25:10.434 [2024-07-15 19:19:50.563895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.563921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.564068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.564093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.564239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.564282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.564475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.564501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.564648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.564675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.564887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.564916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.565073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.565100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.565292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.565318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.565469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.565498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.565706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.565734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 
00:25:10.434 [2024-07-15 19:19:50.565953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.565980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.566144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.566169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.566337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.566364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.566537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.566563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.566711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.566735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.566936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.566961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.567099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.567124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.567302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.567331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.567553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.567579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.567737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.567762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 
00:25:10.434 [2024-07-15 19:19:50.567972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.568009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.568197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.568224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.568420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.568445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.568645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.568673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.568845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.568873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.569053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.569079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.569243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.569268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.569452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.569480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.569646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.569672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.569853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.569887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 
00:25:10.434 [2024-07-15 19:19:50.570084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.570108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.570314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.570339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.570530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.570555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.570697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.570722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.570934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.570961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.571174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.571202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.571349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.571378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.571595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.571621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.571797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.434 [2024-07-15 19:19:50.571822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.434 qpair failed and we were unable to recover it. 00:25:10.434 [2024-07-15 19:19:50.572028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.572055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 
00:25:10.435 [2024-07-15 19:19:50.572228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.572254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.572450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.572478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.572692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.572720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.572939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.572966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.573194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.573220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.573425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.573450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.573645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.573671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.573832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.573857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.574035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.574061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.574228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.574254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 
00:25:10.435 [2024-07-15 19:19:50.574424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.574450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.574639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.574668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.574842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.574868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.575020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.575046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.575243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.575268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.575465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.575490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.575658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.575686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.575901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.575927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.576070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.576105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.576239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.576279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 
00:25:10.435 [2024-07-15 19:19:50.576438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.576469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.576683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.576721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.576958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.576984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.577141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.577166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.577323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.577349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.577519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.577548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.577743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.577771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.577963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.577990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.578124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.578150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.578313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.578339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 
00:25:10.435 [2024-07-15 19:19:50.578482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.578508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.578677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.435 [2024-07-15 19:19:50.578720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.435 qpair failed and we were unable to recover it. 00:25:10.435 [2024-07-15 19:19:50.578905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.578932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.579092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.579118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.579305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.579334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.579487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.579515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.579699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.579725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.579864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.579895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.580067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.580094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.580295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.580321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 
00:25:10.436 [2024-07-15 19:19:50.580514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.580543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.580723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.580749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.580975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.581002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.581170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.581196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.581358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.581386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.581570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.581596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.581798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.581823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.582068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.582098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.582308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.582333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.582480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.582506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 
00:25:10.436 [2024-07-15 19:19:50.582718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.582746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.582967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.582993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.583188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.583217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.583411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.583436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.583644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.583669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.583849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.583883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.584100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.584128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.584343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.584369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.584554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.584583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.584768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.584797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 
00:25:10.436 [2024-07-15 19:19:50.585006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.585035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.585205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.585231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.585472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.585498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.585666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.585691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.585884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.585913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.586090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.586119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.586312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.586337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.586524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.586552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.586728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.586753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.586924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.586950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 
00:25:10.436 [2024-07-15 19:19:50.587116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.587142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.587286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.587311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.436 [2024-07-15 19:19:50.587478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.436 [2024-07-15 19:19:50.587503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.436 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.587694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.587723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.587865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.587900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.588088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.588114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.588312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.588340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.588491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.588518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.588728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.588753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.588960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.588990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 
00:25:10.437 [2024-07-15 19:19:50.589174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.589202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.589358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.589384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.589526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.589568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.589744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.589772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.589963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.589990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.590204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.590232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.590418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.590446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.590633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.590659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.590798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.590823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.590966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.590997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 
00:25:10.437 [2024-07-15 19:19:50.591166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.591191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.591355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.591385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.591567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.591595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.591778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.591807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.592003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.592030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.592200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.592226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.592370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.592395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.592586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.592612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.592835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.592863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.593087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.593114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 
00:25:10.437 [2024-07-15 19:19:50.593308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.593336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.593520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.593548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.593740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.593768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.593922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.593948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.594107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.594133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.594272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.594297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.594512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.594540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.594746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.594774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.594959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.594985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.595181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.595207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 
00:25:10.437 [2024-07-15 19:19:50.595427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.595455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.595680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.595705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.595895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.595924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.596114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.437 [2024-07-15 19:19:50.596140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.437 qpair failed and we were unable to recover it. 00:25:10.437 [2024-07-15 19:19:50.596309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.596336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.596525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.596554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.596733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.596761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.596978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.597005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.597223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.597252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.597436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.597464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 
00:25:10.438 [2024-07-15 19:19:50.597653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.597679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.597831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.597857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.598040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.598066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.598291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.598317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.598551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.598576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.598742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.598768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.598940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.598966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.599133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.599159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.599371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.599400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.599616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.599642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 
00:25:10.438 [2024-07-15 19:19:50.599841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.599870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.600056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.600082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.600269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.600294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.600492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.600519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.600679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.600707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.600970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.601005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.601222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.601250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.601434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.601463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.601654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.601680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.601873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.601907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 
00:25:10.438 [2024-07-15 19:19:50.602122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.602165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.602333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.602359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.602532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.602557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.602746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.602774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.602974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.603002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.603190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.603218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.603404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.603432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.603619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.603644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.603851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.603883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.604067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.604092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 
00:25:10.438 [2024-07-15 19:19:50.604294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.604323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.604506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.604534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.604714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.604742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.604950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.604976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.605115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.605156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.438 [2024-07-15 19:19:50.605330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.438 [2024-07-15 19:19:50.605358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.438 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.605556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.605585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.605768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.605801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.606030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.606057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.606232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.606257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 
00:25:10.439 [2024-07-15 19:19:50.606439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.606472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.606660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.606688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.606887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.606913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.607057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.607084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.607270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.607299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.607491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.607516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.607687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.607716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.607886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.607912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.608082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.608108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.608294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.608323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 
00:25:10.439 [2024-07-15 19:19:50.608532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.608560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.608747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.608777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.608964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.608989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.609123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.609149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.609360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.609416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.609572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.609601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.609785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.609812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.610030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.610071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.610269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.610297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.610494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.610543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 
00:25:10.439 [2024-07-15 19:19:50.610712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.610798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.610976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.611003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.611176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.611220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.611388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.611433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.611628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.611681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.611832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.611868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.612025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.612051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.612258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.612286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.612508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.612552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 00:25:10.439 [2024-07-15 19:19:50.612730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.612756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.439 qpair failed and we were unable to recover it. 
00:25:10.439 [2024-07-15 19:19:50.612922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.439 [2024-07-15 19:19:50.612952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.613135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.613177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.613397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.613441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.613634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.613678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.613817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.613843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.614022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.614068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.614262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.614305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.614478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.614523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.614687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.614714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.614866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.614899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 
00:25:10.440 [2024-07-15 19:19:50.615091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.615134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.615317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.615362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.615524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.615568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.615753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.615780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.615979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.616025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.616185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.616231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.616441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.616470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.616636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.616663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.616858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.616899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.617080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.617125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 
00:25:10.440 [2024-07-15 19:19:50.617333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.617377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.617580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.617626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.617825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.617852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.618024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.618068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.618228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.618276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.618501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.618545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.618743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.618769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.618959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.619004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.619175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.619220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.619383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.619428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 
00:25:10.440 [2024-07-15 19:19:50.619602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.619646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.619842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.619868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.620062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.620107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.620264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.620309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.620501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.620550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.620711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.620752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.620939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.620967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.621116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.621142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.621341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.621369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 00:25:10.440 [2024-07-15 19:19:50.621547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.440 [2024-07-15 19:19:50.621576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.440 qpair failed and we were unable to recover it. 
00:25:10.440 [2024-07-15 19:19:50.621764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.440 [2024-07-15 19:19:50.621792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:10.440 qpair failed and we were unable to recover it.
00:25:10.440 [... the same three-line error block (posix.c:1038:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 19:19:50.621 through 19:19:50.667, log time 00:25:10.440 through 00:25:10.446 ...]
00:25:10.446 [2024-07-15 19:19:50.667871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:10.446 [2024-07-15 19:19:50.667901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:10.446 qpair failed and we were unable to recover it.
00:25:10.446 [2024-07-15 19:19:50.668084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.668113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.668304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.668330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.668474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.668506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.668668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.668693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.668884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.668913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.669126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.669155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.669345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.669374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.669556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.669582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.669770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.669798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.669980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.670009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 
00:25:10.446 [2024-07-15 19:19:50.670191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.670219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.670408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.670435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.670616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.670645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.670831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.670860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.671070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.671097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.671288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.671314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.671535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.671564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.671777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.671806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.672021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.672050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.672237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.672263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 
00:25:10.446 [2024-07-15 19:19:50.672448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.672477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.672659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.672688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.672838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.672867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.673073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.673099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.673314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.673343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.673534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.673563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.673772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.673801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.673970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.673996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.674174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.674203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.674388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.674421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 
00:25:10.446 [2024-07-15 19:19:50.674681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.674734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.674958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.674984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.675202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.675230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.675416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.446 [2024-07-15 19:19:50.675445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.446 qpair failed and we were unable to recover it. 00:25:10.446 [2024-07-15 19:19:50.675717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.675769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.675955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.675982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.676152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.676181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.676358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.676387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.676700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.676752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.676932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.676958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 
00:25:10.447 [2024-07-15 19:19:50.677142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.677170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.677387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.677416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.677633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.677661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.677847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.677874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.678081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.678110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.678266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.678296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.678603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.678664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.678929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.678956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.679166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.679194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.679384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.679413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 
00:25:10.447 [2024-07-15 19:19:50.679737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.679786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.679958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.679984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.680134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.680160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.680350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.680379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.680690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.680749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.680983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.681009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.681219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.681256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.681474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.681503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.681764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.681790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.681963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.681990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 
00:25:10.447 [2024-07-15 19:19:50.682161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.682190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.682399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.682428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.682789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.682839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.683046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.683073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.683275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.683303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.683464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.683493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.683643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.683673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.683860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.683892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.684077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.684105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.684303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.684331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 
00:25:10.447 [2024-07-15 19:19:50.684538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.684568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.684741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.684767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.684997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.685026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.685234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.685263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.685417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.685446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.447 qpair failed and we were unable to recover it. 00:25:10.447 [2024-07-15 19:19:50.685634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.447 [2024-07-15 19:19:50.685659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.685875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.685908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.686116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.686144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.686511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.686563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.686744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.686770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 
00:25:10.448 [2024-07-15 19:19:50.687024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.687053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.687264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.687293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.687655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.687706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.687930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.687956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.688129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.688157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.688345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.688374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.688669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.688731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.688925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.688951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.689097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.689123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.689332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.689360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 
00:25:10.448 [2024-07-15 19:19:50.689613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.689641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.689796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.689823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.690038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.690068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.690283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.690312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.690507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.690533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.690701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.690727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.690894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.690923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.691112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.691145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.691414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.691474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.691656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.691682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 
00:25:10.448 [2024-07-15 19:19:50.691896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.691926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.692089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.692119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.692298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.692363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.692545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.692572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.692760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.692788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.693047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.693077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.693270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.693299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.693547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.693572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.693739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.693768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.693919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.693948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 
00:25:10.448 [2024-07-15 19:19:50.694133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.694162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.448 qpair failed and we were unable to recover it. 00:25:10.448 [2024-07-15 19:19:50.694357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.448 [2024-07-15 19:19:50.694384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.694576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.694605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.694767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.694796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.694971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.695001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.695162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.695188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.695381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.695410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.695572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.695601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.695814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.695842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.696097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.696123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 
00:25:10.449 [2024-07-15 19:19:50.696318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.696346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.696541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.696570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.696792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.696818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.696954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.696980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.697195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.697224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.697415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.697444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.697604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.697632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.697843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.697869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.698127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.698156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.698365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.698394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 
00:25:10.449 [2024-07-15 19:19:50.698685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.698749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.698943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.698969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.699148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.699176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.699356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.699385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.699547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.699576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.699751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.699780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.700004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.700031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.700210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.700235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.700488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.700542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.700733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.700759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 
00:25:10.449 [2024-07-15 19:19:50.700971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.701000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.701174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.701203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.701384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.701412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.701627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.701652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.701841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.701869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.702070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.702099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.702264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.702293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.702458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.702484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.702628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.702672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.702889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.702918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 
00:25:10.449 [2024-07-15 19:19:50.703082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.703108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.703300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.703326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.703481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.449 [2024-07-15 19:19:50.703510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.449 qpair failed and we were unable to recover it. 00:25:10.449 [2024-07-15 19:19:50.703696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.703725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.703901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.703930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.704147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.704173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.704366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.704395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.704539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.704568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.704749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.704777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.704966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.705001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 
00:25:10.450 [2024-07-15 19:19:50.705181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.705210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.705390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.705419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.705678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.705728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.705947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.705973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.706163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.706191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.706376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.706409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.706658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.706710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.706922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.706948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.707143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.707172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.707334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.707363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 
00:25:10.450 [2024-07-15 19:19:50.707535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.707564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.707750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.707776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.707963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.707992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.708174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.708202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.708454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.708507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.708728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.708754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.708956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.708986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.709145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.709183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.709408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.709434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.709655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.709681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 
00:25:10.450 [2024-07-15 19:19:50.709953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.709982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.710192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.710221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.710384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.710413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.710628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.710654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.710864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.710919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.711144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.711170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.711377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.711406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.711635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.711695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.711912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.711939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.712154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.712192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 
00:25:10.450 [2024-07-15 19:19:50.712369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.712398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.712617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.712675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.712898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.712925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.713126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.713154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.450 [2024-07-15 19:19:50.713311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.450 [2024-07-15 19:19:50.713340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.450 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.713557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.713583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.713784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.713812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.714009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.714035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.714227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.714256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.714441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.714471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 
00:25:10.451 [2024-07-15 19:19:50.714670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.714696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.714889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.714918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.715128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.715154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.715360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.715389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.715596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.715625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.715805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.715835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.716040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.716074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.716292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.716321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.716506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.716536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.716732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.716762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 
00:25:10.451 [2024-07-15 19:19:50.716969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.716997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.717172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.717214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.717364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.717395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.717611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.717639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.717826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.717851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.718061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.718087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.718273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.718301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.718495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.718523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.718693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.718718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.718855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.718892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 
00:25:10.451 [2024-07-15 19:19:50.719094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.719119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.719320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.719348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.719533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.719558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.719745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.719773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.719971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.719997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.720166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.720212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.720405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.720431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.720604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.720630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.720832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.720860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.721085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.721110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 
00:25:10.451 [2024-07-15 19:19:50.721252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.721278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.721461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.721489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.721651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.721679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.721833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.721860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.722043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.722068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.722244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.451 [2024-07-15 19:19:50.722270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.451 qpair failed and we were unable to recover it. 00:25:10.451 [2024-07-15 19:19:50.722464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.722492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.722673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.722701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.722900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.722926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.723093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.723119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 
00:25:10.452 [2024-07-15 19:19:50.723288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.723317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.723529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.723580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.723773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.723811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.723982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.724008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.724187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.724213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.724394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.724422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.724606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.724631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.724854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.724911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.725061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.725087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.725306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.725335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 
00:25:10.452 [2024-07-15 19:19:50.725492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.725517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.725723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.725751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.725954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.725980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.726111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.726137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.726311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.726337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.726532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.726570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.726717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.726754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.726966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.726991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.727161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.727195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.727382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.727411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 
00:25:10.452 [2024-07-15 19:19:50.727592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.727620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.727818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.727844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.728062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.728088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.728263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.728293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.728477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.728506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.728670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.728697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.728909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.728935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.729073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.729098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.729317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.729346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.729530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.729558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 
00:25:10.452 [2024-07-15 19:19:50.729721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.729746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.729921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.452 [2024-07-15 19:19:50.729948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.452 qpair failed and we were unable to recover it. 00:25:10.452 [2024-07-15 19:19:50.730120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.730162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.730380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.730405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.730579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.730609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.730807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.730835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.731029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.731055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.731213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.731241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.731459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.731485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.731674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.731702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 
00:25:10.453 [2024-07-15 19:19:50.731857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.731891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.732082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.732108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.732283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.732309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.732535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.732564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.732753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.732781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.732982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.733012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.733196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.733221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.733458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.733487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.733678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.733706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.733932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.733962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 
00:25:10.453 [2024-07-15 19:19:50.734124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.734149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.734325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.734350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.734550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.734577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.734742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.734770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.734997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.735024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.735213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.735244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.735400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.735428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.735624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.735650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.735816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.735841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.736018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.736043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 
00:25:10.453 [2024-07-15 19:19:50.736232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.736260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.736424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.736452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.736623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.736648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.736845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.736870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.737072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.737098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.737254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.737283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.737475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.737502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.737682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.737709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.737905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.737932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.738103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.738128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 
00:25:10.453 [2024-07-15 19:19:50.738293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.738318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.738501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.738530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.738712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.738740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.453 [2024-07-15 19:19:50.738951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.453 [2024-07-15 19:19:50.738981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.453 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.739154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.739181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.739420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.739450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.739618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.739643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.739826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.739855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.740023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.740049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.740245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.740273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 
00:25:10.454 [2024-07-15 19:19:50.740459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.740487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.740676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.740704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.740923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.740949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.741113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.741139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.741297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.741325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.741488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.741517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.741704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.741729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.741898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.741940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.742134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.742160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.742334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.742362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 
00:25:10.454 [2024-07-15 19:19:50.742561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.742586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.742778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.742806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.742998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.743024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.743215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.743243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.743430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.743455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.743644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.743673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.743855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.743906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.744095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.744121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.744257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.744283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.744503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.744532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 
00:25:10.454 [2024-07-15 19:19:50.744768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.744794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.744978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.745007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.745169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.745208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.745405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.745434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.745629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.745655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.745822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.745847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.746003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.746029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.746201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.746227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.746422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.746451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.746628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.746656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 
00:25:10.454 [2024-07-15 19:19:50.746838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.746864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.747085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.747114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.747337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.747365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.747549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.747577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.747768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.454 [2024-07-15 19:19:50.747803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.454 qpair failed and we were unable to recover it. 00:25:10.454 [2024-07-15 19:19:50.748005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.748034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.748221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.748249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.748460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.748488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.748663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.748688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.748889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.748918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 
00:25:10.455 [2024-07-15 19:19:50.749110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.749138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.749317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.749345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.749563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.749589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.749772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.749800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.749990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.750019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.750170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.750207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.750421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.750447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.750645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.750671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.750832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.750860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.751023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.751052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 
00:25:10.455 [2024-07-15 19:19:50.751267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.751293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.751514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.751543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.751758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.751798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.751991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.752018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.752181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.752206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.752441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.752470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.752627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.752655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.752827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.752855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.753075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.753100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.753295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.753324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 
00:25:10.455 [2024-07-15 19:19:50.753534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.753562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.753761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.753786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.753969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.753995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.754163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.754195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.754380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.754413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.754627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.754655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.754847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.754872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.755042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.755071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.755261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.755290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.755501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.755528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 
00:25:10.455 [2024-07-15 19:19:50.755689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.755714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.755869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.755907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.756086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.756114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.756281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.756309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.756488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.756514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.756662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.756699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.455 [2024-07-15 19:19:50.756827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.455 [2024-07-15 19:19:50.756853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.455 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.757063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.757091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.757300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.757325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.757491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.757530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 
00:25:10.456 [2024-07-15 19:19:50.757738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.757766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.757961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.757990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.758186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.758212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.758400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.758428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.758612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.758640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.758794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.758822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.759011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.759037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.759220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.759249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.759401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.759429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.759585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.759613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 
00:25:10.456 [2024-07-15 19:19:50.759819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.759851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.760039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.760065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.760253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.760278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.760438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.760489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.760662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.760687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.760906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.760935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.761123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.761152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.761345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.761373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.761548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.761573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.761758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.761787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 
00:25:10.456 [2024-07-15 19:19:50.762004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.762034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.762213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.762241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.762474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.762499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.762728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.762757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.762968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.762997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.763218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.763243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.763410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.763436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.763622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.763650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.763804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.763832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.764020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.764049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 
00:25:10.456 [2024-07-15 19:19:50.764214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.764240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.456 [2024-07-15 19:19:50.764422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.456 [2024-07-15 19:19:50.764450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.456 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.764610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.764638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.764849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.764885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.765059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.765085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.765267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.765293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.765481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.765510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.765699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.765727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.765890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.765926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.766087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.766117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 
00:25:10.457 [2024-07-15 19:19:50.766319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.766347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.766539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.766568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.766728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.766755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.766925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.766969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.767150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.767179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.767341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.767370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.767531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.767557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.767744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.767773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.767996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.768022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.768158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.768202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 
00:25:10.457 [2024-07-15 19:19:50.768353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.768379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.768594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.768627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.768844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.768869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.769065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.769094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.769275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.769300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.769493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.769522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.769677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.769705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.769850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.769883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.770051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.770076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.770215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.770257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 
00:25:10.457 [2024-07-15 19:19:50.770411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.770440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.770617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.770646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.770797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.770832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.771045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.771074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.771282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.771310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.771498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.771527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.771687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.771714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.771904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.771930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.772113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.772141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.772442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.772495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 
00:25:10.457 [2024-07-15 19:19:50.772709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.772734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.772975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.773004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.773159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.457 [2024-07-15 19:19:50.773187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.457 qpair failed and we were unable to recover it. 00:25:10.457 [2024-07-15 19:19:50.773360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.458 [2024-07-15 19:19:50.773388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.458 qpair failed and we were unable to recover it. 00:25:10.458 [2024-07-15 19:19:50.773542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.458 [2024-07-15 19:19:50.773567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.458 qpair failed and we were unable to recover it. 00:25:10.458 [2024-07-15 19:19:50.773747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.458 [2024-07-15 19:19:50.773776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.458 qpair failed and we were unable to recover it. 00:25:10.458 [2024-07-15 19:19:50.773977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.458 [2024-07-15 19:19:50.774006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.458 qpair failed and we were unable to recover it. 00:25:10.458 [2024-07-15 19:19:50.774164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.458 [2024-07-15 19:19:50.774191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.458 qpair failed and we were unable to recover it. 00:25:10.458 [2024-07-15 19:19:50.774393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.458 [2024-07-15 19:19:50.774422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.458 qpair failed and we were unable to recover it. 00:25:10.458 [2024-07-15 19:19:50.774611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.458 [2024-07-15 19:19:50.774639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.458 qpair failed and we were unable to recover it. 
[... the same three-line failure repeats continuously through [2024-07-15 19:19:50.818848]: posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:25:10.463 [2024-07-15 19:19:50.819042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.819068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.819209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.819250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.819412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.819436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.819626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.819654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.819837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.819865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.820033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.820062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.820262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.820287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.820474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.820503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.820688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.820716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.820927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.820956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 
00:25:10.463 [2024-07-15 19:19:50.821146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.821171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.821352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.821380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.821566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.821605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.821765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.821793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.822009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.822035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.822246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.822274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.822456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.822488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.822652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.822680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.822873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.822903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.823054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.823080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 
00:25:10.463 [2024-07-15 19:19:50.823244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.823269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.823493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.823538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.823734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.823759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.823949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.823978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.824160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.824187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.824435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.824481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.824666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.824691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.824918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.824946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.825130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.825158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.825390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.825440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 
00:25:10.463 [2024-07-15 19:19:50.825627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.825652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.825854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.825895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.826103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.826131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.826352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.826377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.826517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.826549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.826763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.826791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.463 [2024-07-15 19:19:50.826947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.463 [2024-07-15 19:19:50.826975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.463 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.827221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.827272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.827462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.827487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.827679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.827707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 
00:25:10.464 [2024-07-15 19:19:50.827892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.827920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.828128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.828156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.828331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.828356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.828571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.828603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.828819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.828847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.829014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.829042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.829226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.829251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.829401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.829429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.829575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.829603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.829781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.829809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 
00:25:10.464 [2024-07-15 19:19:50.830003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.830029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.830223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.830251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.830435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.830463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.830681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.830706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.830850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.830880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.831054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.831079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.831256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.831284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.831457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.831503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.831672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.831697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.831850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.831881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 
00:25:10.464 [2024-07-15 19:19:50.832057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.832085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.832266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.832311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.832523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.832548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.832705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.832733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.832894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.832923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.464 [2024-07-15 19:19:50.833109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.464 [2024-07-15 19:19:50.833138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.464 qpair failed and we were unable to recover it. 00:25:10.736 [2024-07-15 19:19:50.833299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.736 [2024-07-15 19:19:50.833328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.736 qpair failed and we were unable to recover it. 00:25:10.736 [2024-07-15 19:19:50.833464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.736 [2024-07-15 19:19:50.833506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.736 qpair failed and we were unable to recover it. 00:25:10.736 [2024-07-15 19:19:50.833666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.736 [2024-07-15 19:19:50.833695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.736 qpair failed and we were unable to recover it. 00:25:10.736 [2024-07-15 19:19:50.833858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.736 [2024-07-15 19:19:50.833891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.736 qpair failed and we were unable to recover it. 
00:25:10.736 [2024-07-15 19:19:50.834083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.736 [2024-07-15 19:19:50.834108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.736 qpair failed and we were unable to recover it. 00:25:10.736 [2024-07-15 19:19:50.834267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.736 [2024-07-15 19:19:50.834295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.736 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.834478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.834506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.834697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.834722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.834889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.834915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.835109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.835137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.835298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.835326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.835514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.835542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.835725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.835750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.835943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.835971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 
00:25:10.737 [2024-07-15 19:19:50.836184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.836212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.836433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.836461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.836625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.836649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.836804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.836832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.836997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.837026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.837214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.837241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.837401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.837426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.837566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.837591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.837749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.837774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.837988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.838014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 
00:25:10.737 [2024-07-15 19:19:50.838157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.838184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.838405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.838433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.838599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.838626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.838834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.838862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.839032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.839058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.839244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.839273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.839485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.839513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.839697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.839722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.839919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.839945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.840135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.840163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 
00:25:10.737 [2024-07-15 19:19:50.840377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.840404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.840605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.840657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.840838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.840863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.841016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.841041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.841181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.841206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.841419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.841466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.841683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.841707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.841873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.841906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.842088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.842114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.842381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.842432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 
00:25:10.737 [2024-07-15 19:19:50.842628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.842653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.842867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.842904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.843114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.843142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.843315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.843342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.843512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.843538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.843690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.843718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.843904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.843932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.844117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.844145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.844336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.844361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.844499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.844539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 
00:25:10.737 [2024-07-15 19:19:50.844726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.844753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.844907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.844936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.845098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.845123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.845303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.845330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.845516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.845544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.845757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.845785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.845993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.846019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.846239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.846267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.846449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.846476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.737 qpair failed and we were unable to recover it. 00:25:10.737 [2024-07-15 19:19:50.846763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.737 [2024-07-15 19:19:50.846820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 
00:25:10.738 [2024-07-15 19:19:50.847008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.847033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.847197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.847225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.847376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.847404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.847563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.847591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.847778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.847803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.847966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.847994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.848200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.848229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.848406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.848451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.848635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.848660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.848847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.848880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 
00:25:10.738 [2024-07-15 19:19:50.849091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.849119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.849340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.849390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.849615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.849639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.849858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.849891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.850078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.850119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.850391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.850443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.850642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.850667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.850886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.850915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.851139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.851165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.851355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.851402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 
00:25:10.738 [2024-07-15 19:19:50.851646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.851671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.851855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.851901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.852067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.852099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.852262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.852290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.852512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.852537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.852734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.852761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.852949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.852977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.853172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.853197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.853359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.853384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.853593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.853620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 
00:25:10.738 [2024-07-15 19:19:50.853833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.853858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.854034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.854060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.854226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.854251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.854470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.854497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.854686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.854719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.854869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.854905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.855097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.855123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.855322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.855350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.855501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.855529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.855747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.855797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 
00:25:10.738 [2024-07-15 19:19:50.855992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.856017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.856232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.856259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.856468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.856496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.856667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.856712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.856933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.856958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.857122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.857150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.857349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.857373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.857572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.857597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.857796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.857821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.858013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.858045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 
00:25:10.738 [2024-07-15 19:19:50.858254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.858282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.858560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.858616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.858795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.858820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.859022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.859050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.859257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.859285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.859553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.859600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.859793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.859818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.859975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.738 [2024-07-15 19:19:50.860001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.738 qpair failed and we were unable to recover it. 00:25:10.738 [2024-07-15 19:19:50.860160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.860188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.860378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.860403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 
00:25:10.739 [2024-07-15 19:19:50.860539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.860564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.860746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.860774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.860957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.860986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.861146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.861174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.861361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.861386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.861528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.861554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.861737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.861765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.861983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.862011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.862199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.862225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.862411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.862438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 
00:25:10.739 [2024-07-15 19:19:50.862600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.862628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.862781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.862811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.862987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.863013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.863187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.863212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.863390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.863415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.863602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.863630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.863791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.863817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.864012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.864041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.864258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.864283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.864457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.864482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 
00:25:10.739 [2024-07-15 19:19:50.864614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.864639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.864783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.864828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.865022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.865048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.865270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.865298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.865460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.865485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.865629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.865654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.865818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.865846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.866007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.866036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.866205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.866230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.866381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.866409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 
00:25:10.739 [2024-07-15 19:19:50.866593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.866624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.866785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.866815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.867004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.867030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.867210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.867238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.867408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.867437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.867640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.867692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.867920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.867946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.868117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.868145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.868331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.868359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.868566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.868612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 
00:25:10.739 [2024-07-15 19:19:50.868774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.868799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.868972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.868997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.869138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.869163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.869399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.869447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.869637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.869662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.869804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.869829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.870017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.870043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.870243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.870271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.870486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.870511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.870734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.870783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 
00:25:10.739 [2024-07-15 19:19:50.870949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.870977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.871162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.739 [2024-07-15 19:19:50.871190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.739 qpair failed and we were unable to recover it. 00:25:10.739 [2024-07-15 19:19:50.871360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.871384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.871534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.871559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.871737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.871765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.871924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.871953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.872117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.872143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.872329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.872357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.872525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.872553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.872733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.872761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 
00:25:10.740 [2024-07-15 19:19:50.872928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.872953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.873097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.873142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.873349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.873377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.873583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.873635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.873796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.873821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.873996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.874024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.874217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.874242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.874377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.874402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.874564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.874589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.874777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.874805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 
00:25:10.740 [2024-07-15 19:19:50.874999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.875025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.875238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.875266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.875434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.875459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.875594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.875637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.875820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.875848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.875999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.876027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.876187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.876213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.876432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.876461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.876674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.876727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.876947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.876975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 
00:25:10.740 [2024-07-15 19:19:50.877168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.877193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.877329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.877354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.877522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.877565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.877724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.877752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.877958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.877984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.878160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.878185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.878349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.878374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.878574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.878618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.878806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.878831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.879026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.879054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 
00:25:10.740 [2024-07-15 19:19:50.879203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.879231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.879386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.879414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.879603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.879629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.879822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.879847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.880073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.880112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.880340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.880369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.880530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.880566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.880716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.880742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.880910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.880942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.881120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.881145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 
00:25:10.740 [2024-07-15 19:19:50.881334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.881362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.881503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.881529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.881793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.881842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.882043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.882068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.882269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.882296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.882453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.882480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.882701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.882748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.882946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.882971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.883140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.883181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 00:25:10.740 [2024-07-15 19:19:50.883387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.883417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.740 qpair failed and we were unable to recover it. 
00:25:10.740 [2024-07-15 19:19:50.883617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.740 [2024-07-15 19:19:50.883667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.883827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.883856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.884068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.884093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.884264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.884304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.884524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.884569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.884729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.884756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.884930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.884965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.885141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.885166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.885347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.885384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.885602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.885631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 
00:25:10.741 [2024-07-15 19:19:50.885794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.885818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.885986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.886021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.886165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.886191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.886429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.886478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.886659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.886685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.886897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.886941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.887083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.887107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.887283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.887310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.887511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.887538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.887734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.887764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 
00:25:10.741 [2024-07-15 19:19:50.887959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.887985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.888167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.888209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.888396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.888421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.888580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.888609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.888789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.888816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.889015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.889043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.889183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.889209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.889389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.889417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.889627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.889678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.889841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.889869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 
00:25:10.741 [2024-07-15 19:19:50.890046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.890072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.890292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.890320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.890507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.890537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.890708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.890737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.890904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.890938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.891124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.891166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.891318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.891346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.891488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.891516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.891696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.891721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.891922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.891964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 
00:25:10.741 [2024-07-15 19:19:50.892145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.892172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.892394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.892446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.892676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.892702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.892889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.892920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.893103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.893130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.893382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.893430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.893594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.893619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.893763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.893788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.893924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.893950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.894118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.894160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 
00:25:10.741 [2024-07-15 19:19:50.894321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.894346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.894484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.894528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.894712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.894741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.894892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.894920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.895104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.895128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.895317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.895345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.895569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.895619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.895805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.741 [2024-07-15 19:19:50.895833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.741 qpair failed and we were unable to recover it. 00:25:10.741 [2024-07-15 19:19:50.896014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.896042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.896251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.896279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 
00:25:10.742 [2024-07-15 19:19:50.896433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.896461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.896639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.896667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.896891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.896942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.897139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.897181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.897355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.897383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.897575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.897602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.897747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.897777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.897935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.897961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.898107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.898137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.898341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.898369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 
00:25:10.742 [2024-07-15 19:19:50.898539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.898565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.898782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.898815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.898980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.899008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.899226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.899279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.899476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.899502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.899660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.899689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.899857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.899893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.900091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.900124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.900323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.900348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.900503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.900531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 
00:25:10.742 [2024-07-15 19:19:50.900691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.900720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.900883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.900911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.901104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.901131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.901273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.901298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.901469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.901495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.901655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.901684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.901883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.901911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.902119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.902149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.902307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.902335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.902565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.902614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 
00:25:10.742 [2024-07-15 19:19:50.902778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.902803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.903000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.903031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.903217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.903244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.903487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.903533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.903718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.903743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.903898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.903924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.904085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.904110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.904331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.904384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.904631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.904658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.904883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.904909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 
00:25:10.742 [2024-07-15 19:19:50.905081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.905108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.905282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.905312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.905551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.905583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.905779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.905807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.905982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.906008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.906152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.906192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.906362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.906389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.906542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.906573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.906763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.906796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 00:25:10.742 [2024-07-15 19:19:50.907008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.742 [2024-07-15 19:19:50.907035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.742 qpair failed and we were unable to recover it. 
00:25:10.743 [2024-07-15 19:19:50.907237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.907273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.907475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.907514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.907696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.907724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.907911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.907956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.908127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.908151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.908321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.908359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.908581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.908609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.908786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.908827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.909065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.909091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.909258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.909295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 
00:25:10.743 [2024-07-15 19:19:50.909461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.909489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.909647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.909675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.909870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.909900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.910101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.910131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.910297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.910332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.910478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.910502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.910650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.910681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.910888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.910917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.911075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.911104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.911332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.911382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 
00:25:10.743 [2024-07-15 19:19:50.911575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.911611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.911805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.911834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.912061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.912088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.912337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.912387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.912573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.912608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.912832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.912861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.913030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.913058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.913265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.913317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.913503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.913528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.913716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.913744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 
00:25:10.743 [2024-07-15 19:19:50.913899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.913929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.914091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.914119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.914307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.914340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.914499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.914528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.914714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.914742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.914914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.914944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.915126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.915152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.915340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.915379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.915577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.915606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.915748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.915776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 
00:25:10.743 [2024-07-15 19:19:50.915991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.916017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.916188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.916213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.916356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.916381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.916545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.916600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.916810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.916835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.917036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.917077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.917244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.917274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.917504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.917549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.917746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.917771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.917953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.917978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 
00:25:10.743 [2024-07-15 19:19:50.918162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.918196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.918408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.918458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.918680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.918705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.918850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.918882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.919089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.919129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.919373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.919427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.919632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.919657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.919845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.919872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.920102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.920128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.920412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.920462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 
00:25:10.743 [2024-07-15 19:19:50.920655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.920680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.920841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.920869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.921088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.921117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.921317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.921347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.921501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.921526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.921724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.921753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.921912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.921941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.922094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.922124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.922312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.922337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 00:25:10.743 [2024-07-15 19:19:50.922541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.743 [2024-07-15 19:19:50.922572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.743 qpair failed and we were unable to recover it. 
00:25:10.746 [2024-07-15 19:19:50.965449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.965476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.965655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.965682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.965844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.965871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.966064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.966089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.966230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.966255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.966456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.966481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.966733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.966779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.966997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.967023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.746 [2024-07-15 19:19:50.967212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.746 [2024-07-15 19:19:50.967240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.746 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.967399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.967427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 
00:25:10.747 [2024-07-15 19:19:50.967680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.967726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.967893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.967919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.968106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.968133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.968342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.968369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.968607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.968653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.968867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.968898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.969057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.969085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.969266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.969294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.969539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.969585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.969811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.969836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 
00:25:10.747 [2024-07-15 19:19:50.970015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.970041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.970183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.970209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.970467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.970514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.970704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.970730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.970874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.970904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.971070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.971114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.971383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.971433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.971619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.971644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.971860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.971901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.972053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.972081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 
00:25:10.747 [2024-07-15 19:19:50.972269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.972298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.972512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.972537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.972684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.972712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.972864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.972901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.973122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.973170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.973333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.973362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.973549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.973577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.973763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.973791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.973966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.974014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.974204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.974229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 
00:25:10.747 [2024-07-15 19:19:50.974424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.974452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.974637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.974664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.974872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.974906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.975083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.975108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.975279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.975306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.975525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.975552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.975747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.975773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.975962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.975987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.976177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.976205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.976421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.976449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 
00:25:10.747 [2024-07-15 19:19:50.976688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.976734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.976923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.976949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.977136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.977163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.977351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.977379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.977653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.977701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.977887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.977912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.978104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.978132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.978356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.978384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.978660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.978708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.978881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.978908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 
00:25:10.747 [2024-07-15 19:19:50.979124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.979152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.979326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.979354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.979614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.979663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.979900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.979926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.980123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.980151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.980361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.980389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.980605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.980663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.980884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.980910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.981149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.981174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.981374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.981399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 
00:25:10.747 [2024-07-15 19:19:50.981602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.981655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.981919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.981945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.982135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.982176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.982355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.982383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.982554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.982606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.982796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.982825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.983005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.983034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.983244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.747 [2024-07-15 19:19:50.983271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.747 qpair failed and we were unable to recover it. 00:25:10.747 [2024-07-15 19:19:50.983496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.983525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.983691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.983716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.983933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.983961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.984129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.984158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.984394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.984439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.984637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.984663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.984852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.984886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.985107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.985135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.985331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.985357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.985522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.985547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.985764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.985792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.986015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.986043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.986346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.986396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.986580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.986605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.986765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.986793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.986941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.986970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.987160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.987188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.987400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.987425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.987614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.987641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.987824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.987851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.988057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.988085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.988257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.988283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.988427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.988453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.988620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.988647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.988881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.988907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.989096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.989120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.989289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.989314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.989497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.989524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.989817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.989873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.990051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.990077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.990255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.990283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.990477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.990503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.990733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.990780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.990993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.991019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.991212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.991241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.991440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.991465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.991663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.991687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.991893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.991944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.992087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.992114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.992361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.992387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.992556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.992581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.992776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.992801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.992978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.993005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.993221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.993246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.993390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.993413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.993606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.993631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.993822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.993849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.994052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.994076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.994221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.994244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.994390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.994414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.994577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.994602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.994743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.994767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.994924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.994949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.995089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.995116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.995304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.995332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.995513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.995540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.995725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.995754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.995922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.995948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.996116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.996141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.996352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.996380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.996617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.996663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.996821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.996846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.997027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.997053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.997200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.997224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.997417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.997445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.997607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.997632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.997764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.997790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.997985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.998013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.998216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.998269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.998440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.998465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.998603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.998629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.998846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.998874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 
00:25:10.748 [2024-07-15 19:19:50.999092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.999117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.999288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.999312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.999454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.999479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.999650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.999675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:50.999836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:50.999866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:51.000070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.748 [2024-07-15 19:19:51.000099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.748 qpair failed and we were unable to recover it. 00:25:10.748 [2024-07-15 19:19:51.000303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.000328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.000513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.000540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.000724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.000752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.000937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.000963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.001123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.001148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.001308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.001335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.001486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.001514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.001680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.001705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.001936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.001965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.002129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.002156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.002330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.002357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.002537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.002562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.002755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.002783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.002945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.002974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.003124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.003153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.003348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.003374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.003534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.003562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.003718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.003746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.003900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.003932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.004096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.004122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.004265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.004290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.004459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.004484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.004675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.004702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.004888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.004915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.005102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.005130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.005312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.005340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.005528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.005576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.005773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.005799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.006022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.006049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.006231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.006259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.006436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.006489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.006726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.006752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.006938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.006967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.007150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.007178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.007436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.007464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.007632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.007657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.007826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.007869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.008060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.008087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.008247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.008275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.008490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.008520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.008710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.008738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.008935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.008965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.009194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.009228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.009465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.009490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.009642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.009666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.009798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.009823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.010015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.010044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.010256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.010281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.010476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.010504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.010653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.010680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.010895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.010929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.011093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.011117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.011298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.011326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.011494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.011522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.011704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.011732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.011896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.011923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.012088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.012116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.012322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.012349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.012554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.012582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.012769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.012795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.012993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.013021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.013182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.013211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.013482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.013531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.013727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.013751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.013944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.013972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.014158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.014186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.014501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.014552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.014770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.014795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.014944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.014970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.015142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.015168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.015428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.015480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.015671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.015699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.015865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.015896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.016089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.016113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 
00:25:10.749 [2024-07-15 19:19:51.016423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.016473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.016665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.016689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.749 [2024-07-15 19:19:51.016832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.749 [2024-07-15 19:19:51.016872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.749 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.017099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.017125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.017305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.017331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.017568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.017597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.017777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.017805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.017987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.018018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.018227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.018255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.018448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.018472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 
00:25:10.750 [2024-07-15 19:19:51.018663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.018690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.018870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.018903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.019090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.019120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.019309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.019334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.019498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.019522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.019728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.019752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.019919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.019944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.020114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.020141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.020363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.020389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.020606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.020634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 
00:25:10.750 [2024-07-15 19:19:51.020812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.020840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.021035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.021060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.021223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.021250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.021436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.021463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.021702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.021751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.021940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.021966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.022137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.022162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.022345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.022372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.022561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.022585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.022754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.022779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 
00:25:10.750 [2024-07-15 19:19:51.022938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.022967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.023124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.023152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.023435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.023487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.023700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.023725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.023919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.023947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.024158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.024186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.024345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.024374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.024566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.024591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.024729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.024755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.024944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.024970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 
00:25:10.750 [2024-07-15 19:19:51.025224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.025272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.025479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.025504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.025658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.025686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.025893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.025936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.026099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.026123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.026297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.026327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.026506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.026531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.026759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.026784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.026987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.027052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.027248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.027273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 
00:25:10.750 [2024-07-15 19:19:51.027445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.027470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.027659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.027683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.027892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.027921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.028107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.028133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.028322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.028351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.028505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.028532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.028740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.028767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.028960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.028986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.029155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.029184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.029342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.029370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 
00:25:10.750 [2024-07-15 19:19:51.029521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.029548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.029742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.029767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.029978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.030008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.030205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.030233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.030390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.030417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.030628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.030653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.030802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.030827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.030987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.031013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.031148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.031173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.031336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.031362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 
00:25:10.750 [2024-07-15 19:19:51.031531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.031558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.031709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.031737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.031931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.031960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.750 [2024-07-15 19:19:51.032152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.750 [2024-07-15 19:19:51.032176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.750 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 19:19:51.032326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.751 [2024-07-15 19:19:51.032351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 19:19:51.032520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.751 [2024-07-15 19:19:51.032545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 19:19:51.032744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.751 [2024-07-15 19:19:51.032768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 19:19:51.032907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.751 [2024-07-15 19:19:51.032932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 19:19:51.033104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.751 [2024-07-15 19:19:51.033129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 19:19:51.033297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.751 [2024-07-15 19:19:51.033326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.751 qpair failed and we were unable to recover it. 
00:25:10.753 [2024-07-15 19:19:51.077463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.077511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.077702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.077727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.077919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.077948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.078168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.078196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.078456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.078508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.078706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.078731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.078926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.078955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.079136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.079164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.079377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.079426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.079646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.079671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 
00:25:10.753 [2024-07-15 19:19:51.079857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.079902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.080089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.080119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.080365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.080393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.080608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.080633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.080790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.080819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.080981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.081009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.081190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.753 [2024-07-15 19:19:51.081218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.753 qpair failed and we were unable to recover it. 00:25:10.753 [2024-07-15 19:19:51.081405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.081430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.081619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.081647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.081835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.081862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 
00:25:10.754 [2024-07-15 19:19:51.082087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.082112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.082274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.082299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.082485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.082513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.082695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.082723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.083008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.083056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.083271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.083296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.083505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.083533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.083712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.083741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.083927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.083956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.084147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.084171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 
00:25:10.754 [2024-07-15 19:19:51.084334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.084360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.084548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.084576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.084736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.084769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.084993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.085019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.085187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.085215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.085365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.085392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.085577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.085637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.085806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.085832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.086018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.086045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.086239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.086264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 
00:25:10.754 [2024-07-15 19:19:51.086516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.086566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.086751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.086776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.086963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.086991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.087155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.087183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.087364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.087421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.087623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.087648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.087840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.087867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.088042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.088070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.088264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.088289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.088480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.088506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 
00:25:10.754 [2024-07-15 19:19:51.088666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.088694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.088902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.088931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.089109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.089134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.089324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.089349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.089512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.089540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.089744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.089772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.089939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.089964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.090131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.090155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.090346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.090374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.090561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.090589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 
00:25:10.754 [2024-07-15 19:19:51.090744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.090772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.090952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.090977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.091168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.091195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.091377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.091405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.091665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.091713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.091905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.091931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.092095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.092122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.092306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.092335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.092576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.092627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.092812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.092837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 
00:25:10.754 [2024-07-15 19:19:51.093003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.093031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.093186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.093216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.093429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.093461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.093656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.093682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.093901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.093927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.094071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.094114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.094338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.094389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.094583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.094609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.094795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.094822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.094988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.095013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 
00:25:10.754 [2024-07-15 19:19:51.095185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.095227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.095440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.095465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.095626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.095652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.095861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.095902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.096092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.754 [2024-07-15 19:19:51.096119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.754 qpair failed and we were unable to recover it. 00:25:10.754 [2024-07-15 19:19:51.096316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.096341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.096506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.096534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.096713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.096743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.096930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.096958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.097145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.097169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.097358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.097385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.097571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.097599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.097783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.097811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.097973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.097998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.098187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.098214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.098393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.098420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.098653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.098704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.098868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.098899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.099061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.099089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.099274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.099302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.099578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.099636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.099833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.099859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.100041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.100066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.100239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.100267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.100421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.100451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.100658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.100686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.100887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.100945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.101104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.101131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.101322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.101351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.101590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.101641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.101951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.101977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.102144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.102169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.102384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.102417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.102645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.102670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.102840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.102865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.103013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.103038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.103174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.103199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.103342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.103367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.103536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.103561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.103730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.103755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.103945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.103973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.104152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.104180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.104360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.104388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.104600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.104626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.104810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.104839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.105002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.105028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.105180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.105205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.105351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.105376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.105591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.105619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.105804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.105832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.106007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.106034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.106175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.106200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.106414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.106442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.106743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.106797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.106988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.107016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.107199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.107225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.107446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.107474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.107785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.107834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.108052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.108079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.108264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.108290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.108457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.108486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.108673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.108724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.108909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.108938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.109105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.109131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.109268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.109294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.109475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.109503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.109659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.109687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.109921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.109948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.110096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.110121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.110290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.110316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.110517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.110573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.110754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.110779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.110926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.110952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.111119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.111167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.111354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.111382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.111573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.111599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.111812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.111840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.112023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.112049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.112242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.112267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.112439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.112464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 
00:25:10.755 [2024-07-15 19:19:51.112640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.112668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.112831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.755 [2024-07-15 19:19:51.112859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.755 qpair failed and we were unable to recover it. 00:25:10.755 [2024-07-15 19:19:51.113034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.113059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.113224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.113250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.113441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.113469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.113651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.113680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.113872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.113914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.114098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.114124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.114294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.114323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.114481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.114509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 
00:25:10.756 [2024-07-15 19:19:51.114721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.114749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.114953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.114979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.115142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.115170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.115331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.115359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.115542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.115569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.115736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.115761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.115902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.115928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.116116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.116144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.116298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.116326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.116514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.116539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 
00:25:10.756 [2024-07-15 19:19:51.116726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.116759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.116974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.117003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.117157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.117187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.117351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.117376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.117519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.117559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.117773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.117801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.118002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.118028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.118167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.118193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.118348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.118376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.118591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.118619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 
00:25:10.756 [2024-07-15 19:19:51.118799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.118827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.119000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.119026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.119204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.119232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.119438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.119466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.119594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc200e0 is same with the state(5) to be set 00:25:10.756 [2024-07-15 19:19:51.119829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.119867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.120031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.120059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.120250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.120277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.120549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.120601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.120785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.120813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 
00:25:10.756 [2024-07-15 19:19:51.120973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.121006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96b0000b90 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.121152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.121195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.121407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.121435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.121594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.121622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.121807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.121834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.122052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.122078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.122242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.122270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.122561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.122608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.122763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.122791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.122971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.122997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 
00:25:10.756 [2024-07-15 19:19:51.123145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.123170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.123336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.123364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.123553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.123581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.123786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.123813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.123973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.123999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.124143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.124186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.124367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.124392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.124585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.124613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.124764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.124791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.124982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.125008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 
00:25:10.756 [2024-07-15 19:19:51.125182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.125207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.125397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.125425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.125645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.125673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.125854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.125891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.126082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.126107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.126273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.126298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.126463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.126492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.126783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.126840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.127031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.127057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.127226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.127254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 
00:25:10.756 [2024-07-15 19:19:51.127441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.756 [2024-07-15 19:19:51.127468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.756 qpair failed and we were unable to recover it. 00:25:10.756 [2024-07-15 19:19:51.127641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.127668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.127852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.127887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.128051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.128076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.128239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.128264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.128428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.128460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.128655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.128709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.128918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.128945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.129086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.129111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.129256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.129281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 
00:25:10.757 [2024-07-15 19:19:51.129449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.129477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.129630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.129658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.129836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.129863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.130048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.130073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.130267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.130295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.130458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.130486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.130661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.130689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.130844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.130872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.131041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.131066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.131209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.131234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 
00:25:10.757 [2024-07-15 19:19:51.131418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.131446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.131633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.131661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.131874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.131905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.132070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.132095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.132286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.132314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.132616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.132665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.132850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.132894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.133060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.133085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.133227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.133253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.133423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.133448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 
00:25:10.757 [2024-07-15 19:19:51.133638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.133663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.133837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.133862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.134044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.134078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.134255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.134283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.134446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.134471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.134662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.134689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.134874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.134908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.135073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.135099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.135268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.135293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.135457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.135485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 
00:25:10.757 [2024-07-15 19:19:51.135675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.135700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.135850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.135881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.136076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.136103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.136317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.136342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.136526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.136554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.136769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.136798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.136999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.137025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.137188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.137216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.137404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.137432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.137618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.137643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 
00:25:10.757 [2024-07-15 19:19:51.137830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.137858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.138024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.138052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.138245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.138272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.138431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.138460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.138631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.138658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.138842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.138867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.139055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.139083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.757 [2024-07-15 19:19:51.139233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.757 [2024-07-15 19:19:51.139261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.757 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.139480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.139505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.139670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.139698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.139855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.139890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.140073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.140098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.140289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.140317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.140507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.140533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.140665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.140690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.140830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.140873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.141059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.141087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.141256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.141281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.141446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.141473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.141631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.141658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.141852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.141884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.142044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.142072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.142220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.142248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.142461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.142492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.142686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.142715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.142872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.142906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.143093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.143118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.143270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.143298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.143512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.143540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.143724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.143749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.143912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.143940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.144117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.144145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.144335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.144360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.144553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.144581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.144732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.144760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.144939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.144966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.145129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.145157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.145345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.145373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.145537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.145562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.145724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.145749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.145935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.145961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.146127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.146152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.146340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.146368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.146550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.146578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.146793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.146818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.147010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.147039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.147191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.147219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.147405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.147430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.147610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.147639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.147819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.147847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.148046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.148075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.148269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.148294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.148491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.148519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.148705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.148730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.148917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.148946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.149157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.149185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.149344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.149369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.149513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.149554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.149736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.149764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.149926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.149952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.150162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.150190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.150371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.150399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.150566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.150590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.150786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.150811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.151010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.151037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.151206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.151232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.151393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.151421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.151608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.151636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.151851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.151883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.152024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.152049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.152217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.152244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.152388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.152413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.152577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.152606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.152794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.152823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.153014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.153040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.153230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.153259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.153442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.153470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.153658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.153683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.153833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.153858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.154046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.154075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 
00:25:10.758 [2024-07-15 19:19:51.154245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.154270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.154440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.154483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.154671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.154698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.154888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.154914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.155056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.155081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:10.758 [2024-07-15 19:19:51.155247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.758 [2024-07-15 19:19:51.155290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:10.758 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.155473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.155500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.155653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.155682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.155839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.155867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.156057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.156084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 
00:25:11.040 [2024-07-15 19:19:51.156246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.156274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.156448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.156480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.156666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.156691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.156882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.156911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.157104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.157132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.157310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.157335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.157503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.157530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.157684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.157711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.157909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.157935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.158149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.158176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 
00:25:11.040 [2024-07-15 19:19:51.158339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.158367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.158547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.158572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.158791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.158818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.159004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.159032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.159196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.159221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.159372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.159397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.159565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.159590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.159759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.159787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.159970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.159995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.160208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.160236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 
00:25:11.040 [2024-07-15 19:19:51.160395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.160420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.160618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.160645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.160821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.160850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.161032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.161058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.161261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.161289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.161482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.161510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.161672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.161697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.161892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.161921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.162072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.162100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.162259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.162284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 
00:25:11.040 [2024-07-15 19:19:51.162423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.162465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.162652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.040 [2024-07-15 19:19:51.162680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.040 qpair failed and we were unable to recover it. 00:25:11.040 [2024-07-15 19:19:51.162834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.162859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.163004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.163045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.163230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.163258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.163417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.163442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.163631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.163659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.163863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.163897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.164056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.164081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.164244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.164272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 
00:25:11.041 [2024-07-15 19:19:51.164419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.164447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.164645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.164670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.164821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.164846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.164999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.165025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.165169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.165194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.165375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.165403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.165586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.165614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.165803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.165828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.165979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.166008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.166171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.166200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 
00:25:11.041 [2024-07-15 19:19:51.166365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.166391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.166573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.166600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.166785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.166814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.166973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.166999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.167160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.167210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.167387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.167416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.167611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.167636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.167849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.167882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.168077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.168105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.168293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.168318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 
00:25:11.041 [2024-07-15 19:19:51.168557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.168585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.168767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.168795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.168980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.169006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.169194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.169222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.169403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.169430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.169620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.169645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.169812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.169839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.170049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.170075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.170268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.170293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.170462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.170496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 
00:25:11.041 [2024-07-15 19:19:51.170678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.170706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.170869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.170902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.171110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.171138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.171321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.171350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.171540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.171566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.171782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.171810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.172005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.172034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.172220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.172245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.172425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.172453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.041 qpair failed and we were unable to recover it. 00:25:11.041 [2024-07-15 19:19:51.172669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.041 [2024-07-15 19:19:51.172697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 
00:25:11.042 [2024-07-15 19:19:51.172889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.172915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.173073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.173100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.173280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.173307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.173502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.173528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.173714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.173742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.173921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.173950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.174114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.174139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.174329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.174357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.174580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.174605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.174777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.174807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 
00:25:11.042 [2024-07-15 19:19:51.174998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.175023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.175171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.175195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.175364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.175389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.175588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.175616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.175801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.175829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.176022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.176049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.176209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.176237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.176435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.176461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.176604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.176629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.176818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.176845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 
00:25:11.042 [2024-07-15 19:19:51.177047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.177073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.177217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.177242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.177407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.177435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.177594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.177622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.177781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.177806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.177988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.178017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.178211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.178239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.178434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.178459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.178655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.178683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.178863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.178897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 
00:25:11.042 [2024-07-15 19:19:51.179083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.179112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.179321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.179349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.179525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.179553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.179741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.179766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.179940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.179966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.180156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.180183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.180341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.180367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.180507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.180549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.180760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.180788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.180952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.180977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 
00:25:11.042 [2024-07-15 19:19:51.181112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.181138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.181306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.181333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.181545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.181570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.181733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.181761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.181971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.182000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.182263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.182289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.182516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.182544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.182723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.182752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.182937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.182963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.183104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.183129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 
00:25:11.042 [2024-07-15 19:19:51.183292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.183335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.042 [2024-07-15 19:19:51.183547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.042 [2024-07-15 19:19:51.183572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.042 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.183763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.183791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.183972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.184000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.184180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.184205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.184364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.184392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.184579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.184607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.184768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.184800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.184993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.185023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.185230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.185258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 
00:25:11.043 [2024-07-15 19:19:51.185419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.185444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.185668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.185696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.185853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.185891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.186109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.186134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.186299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.186327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.186543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.186571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.186761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.186786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.186974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.187002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.187185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.187213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.187371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.187397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 
00:25:11.043 [2024-07-15 19:19:51.187587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.187615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.187842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.187867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.188043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.188068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.188238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.188263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.188460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.188488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.188670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.188695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.188849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.188884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.189078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.189103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.189264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.189289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.189430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.189455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 
00:25:11.043 [2024-07-15 19:19:51.189624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.189649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.189860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.189920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.190067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.190093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.190237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.190262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.190422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.190447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.190613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.190641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.190826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.190854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.191018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.191043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.191179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.191220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.191406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.191435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 
00:25:11.043 [2024-07-15 19:19:51.191623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.191648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.191841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.191869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.192097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.192125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.192289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.192314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.192501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.192529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.192744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.043 [2024-07-15 19:19:51.192772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.043 qpair failed and we were unable to recover it. 00:25:11.043 [2024-07-15 19:19:51.192988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.193014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.193180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.193208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.193367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.193401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.193623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.193648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 
00:25:11.044 [2024-07-15 19:19:51.193839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.193867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.194030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.194058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.194228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.194253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.194434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.194462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.194649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.194677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.194834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.194859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.195048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.195076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.195259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.195287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.195493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.195518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.195699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.195728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 
00:25:11.044 [2024-07-15 19:19:51.195920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.195946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.196101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.196126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.196282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.196311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.196466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.196494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.196702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.196727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.196937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.196965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.197178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.197206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.197392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.197417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.197580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.197607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.197793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.197821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 
00:25:11.044 [2024-07-15 19:19:51.198011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.198037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.198250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.198278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.198460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.198488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.198701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.198726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.198924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.198953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.199138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.199170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.199349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.199374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.199560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.199587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.199766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.199794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.199954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.199980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 
00:25:11.044 [2024-07-15 19:19:51.200129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.200172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.200315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.200343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.200525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.200550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.200718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.200743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.200928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.200957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.201168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.201193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.201373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.201401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.201561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.201589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.201777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.201804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.201973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.202002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 
00:25:11.044 [2024-07-15 19:19:51.202187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.202216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.202394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.202419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.202597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.202625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.202807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.202835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.203051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.203077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.203261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.203289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.203496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.203521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.203662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.203688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.203848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.203882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.204069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.204097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 
00:25:11.044 [2024-07-15 19:19:51.204288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.044 [2024-07-15 19:19:51.204313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.044 qpair failed and we were unable to recover it. 00:25:11.044 [2024-07-15 19:19:51.204474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.204503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.204693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.204720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.204925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.204967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.205179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.205208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.205383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.205410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.205574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.205599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.205786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.205814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.205989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.206018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.206206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.206232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 
00:25:11.045 [2024-07-15 19:19:51.206392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.206420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.206597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.206625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.206805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.206831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.207017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.207046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.207258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.207286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.207468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.207494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.207651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.207683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.207885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.207910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.208049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.208074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.208230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.208259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 
00:25:11.045 [2024-07-15 19:19:51.208419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.208447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.208630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.208655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.208822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.208850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.209046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.209071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.209219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.209244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.209429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.209457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.209633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.209662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.209854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.209897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.210043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.210068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.210204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.210229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 
00:25:11.045 [2024-07-15 19:19:51.210402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.210428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.210616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.210644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.210902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.210931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.211144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.211169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.211361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.211389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.211545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.211573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.211820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.211845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.212038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.212066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.212245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.212273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 00:25:11.045 [2024-07-15 19:19:51.212453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.045 [2024-07-15 19:19:51.212478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.045 qpair failed and we were unable to recover it. 
00:25:11.045 [2024-07-15 19:19:51.212729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.045 [2024-07-15 19:19:51.212756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:11.045 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt, with only the timestamps advancing, up to the final attempt shown below ...]
00:25:11.049 [2024-07-15 19:19:51.256323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.049 [2024-07-15 19:19:51.256348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:11.049 qpair failed and we were unable to recover it.
00:25:11.049 [2024-07-15 19:19:51.256538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.256567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.256743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.256771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.256982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.257008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.257225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.257253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.257441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.257468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.257687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.257713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.257895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.257938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.258111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.258136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.258334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.258359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.258564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.258591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 
00:25:11.049 [2024-07-15 19:19:51.258783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.258811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.259031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.259060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.259199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.259224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.259390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.259415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.259608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.259633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.259784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.259812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.260021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.260050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.260239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.260264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.260452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.260480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.260671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.260699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 
00:25:11.049 [2024-07-15 19:19:51.260892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.260918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.261108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.261136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.261345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.261372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.261537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.261562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.049 qpair failed and we were unable to recover it. 00:25:11.049 [2024-07-15 19:19:51.261749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.049 [2024-07-15 19:19:51.261777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.261974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.262003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.262167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.262193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.262334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.262359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.262557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.262582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.262786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.262811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 
00:25:11.050 [2024-07-15 19:19:51.262999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.263027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.263200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.263228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.263420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.263445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.263612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.263637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.263800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.263828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.264022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.264048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.264237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.264265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.264419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.264447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.264636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.264664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.264859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.264894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 
00:25:11.050 [2024-07-15 19:19:51.265085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.265113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.265299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.265324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.265485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.265513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.265702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.265727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.265893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.265918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.266055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.266096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.266258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.266288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.266475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.266500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.266646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.266687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.266845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.266873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 
00:25:11.050 [2024-07-15 19:19:51.267029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.267054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.267255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.267283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.267484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.267511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.267714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.267740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.267927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.267956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.268139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.268167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.268352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.268378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.268564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.268592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.268746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.268774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.268959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.268985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 
00:25:11.050 [2024-07-15 19:19:51.269198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.269226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.269401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.269429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.269611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.269637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.269853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.269898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.270113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.270141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.270296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.050 [2024-07-15 19:19:51.270321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.050 qpair failed and we were unable to recover it. 00:25:11.050 [2024-07-15 19:19:51.270496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.270521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.270655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.270696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.270908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.270934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.271096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.271123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 
00:25:11.051 [2024-07-15 19:19:51.271276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.271304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.271488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.271513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.271706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.271733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.271911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.271940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.272131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.272156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.272359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.272387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.272572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.272600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.272784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.272808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.272996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.273025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.273214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.273246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 
00:25:11.051 [2024-07-15 19:19:51.273433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.273458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.273631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.273656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.273822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.273847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.274023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.274048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.274261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.274289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.274440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.274469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.274656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.274681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.274849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.274883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.275072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.275099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.275288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.275315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 
00:25:11.051 [2024-07-15 19:19:51.275508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.275536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.275715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.275743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.275951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.275978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.276130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.276171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.276381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.276409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.276605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.276630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.276793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.276819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.277005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.277031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.277202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.277228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.277445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.277473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 
00:25:11.051 [2024-07-15 19:19:51.277687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.277712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.277887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.277913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.278084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.278109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.278329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.278357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.278571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.278596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.278809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.278837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.279065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.279097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.279255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.279281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.279466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.279493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.279646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.279674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 
00:25:11.051 [2024-07-15 19:19:51.279865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.279896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.280090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.280118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.280296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.280324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.280538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.280563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.280752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.280779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.280926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.280955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.281148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.051 [2024-07-15 19:19:51.281173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.051 qpair failed and we were unable to recover it. 00:25:11.051 [2024-07-15 19:19:51.281358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.281386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.281536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.281565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.281753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.281778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 
00:25:11.052 [2024-07-15 19:19:51.281971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.282000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.282148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.282176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.282389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.282414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.282632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.282657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.282815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.282840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.283019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.283045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.283200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.283228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.283407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.283435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.283652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.283677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.283866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.283902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 
00:25:11.052 [2024-07-15 19:19:51.284084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.284113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.284271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.284296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.284481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.284509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.284692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.284720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.284922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.284968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.285185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.285213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.285396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.285424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.285581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.285606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.285792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.285820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 00:25:11.052 [2024-07-15 19:19:51.286013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.052 [2024-07-15 19:19:51.286039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.052 qpair failed and we were unable to recover it. 
00:25:11.056 [2024-07-15 19:19:51.328349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.328377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.328593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.328618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.328775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.328803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.329022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.329051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.329254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.329280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.329472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.329500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.329709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.329737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.329954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.329981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.330162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.330188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.330377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.330406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 
00:25:11.056 [2024-07-15 19:19:51.330567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.330593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.330804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.330837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.331064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.331090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.331252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.331277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.331442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.331470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.331651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.331679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.331872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.331903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.332048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.332073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.332269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.332297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.332463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.332488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 
00:25:11.056 [2024-07-15 19:19:51.332657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.332682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.332896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.332925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.333138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.333164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.333348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.333376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.333588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.333616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.333837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.333866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.334075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.334100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.334267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.334292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.334455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.334480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.334650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.334678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 
00:25:11.056 [2024-07-15 19:19:51.334822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.334850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.335067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.335093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.335258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.335284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.335476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.335504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.335691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.335716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.335903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.335941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.336104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.336132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.336288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.056 [2024-07-15 19:19:51.336316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.056 qpair failed and we were unable to recover it. 00:25:11.056 [2024-07-15 19:19:51.336528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.336556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.336719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.336748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 
00:25:11.057 [2024-07-15 19:19:51.336905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.336932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.337143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.337172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.337334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.337362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.337546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.337571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.337783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.337810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.338032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.338061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.338250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.338276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.338455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.338483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.338687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.338715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.338899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.338925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 
00:25:11.057 [2024-07-15 19:19:51.339086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.339114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.339289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.339317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.339475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.339504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.339716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.339744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.339965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.339994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.340158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.340183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.340364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.340392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.340538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.340566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.340774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.340801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.340991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.341017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 
00:25:11.057 [2024-07-15 19:19:51.341180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.341209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.341392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.341417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.341602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.341630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.341776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.341804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.342002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.342028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.342185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.342214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.342378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.342407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.342584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.342609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.342788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.342816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.342997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.343026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 
00:25:11.057 [2024-07-15 19:19:51.343209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.343234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.343414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.343442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.343619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.343647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.343857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.343888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.344111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.344139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.344298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.344326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.344538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.344564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.344750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.344778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.344940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.344969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.345152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.345181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 
00:25:11.057 [2024-07-15 19:19:51.345323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.345348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.345518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.345543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.345709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.345735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.345921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.345950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.346101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.346129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.346310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.346335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.346528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.346554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.346738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.346766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.346950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.346976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.347176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.347204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 
00:25:11.057 [2024-07-15 19:19:51.347413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.347441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.347602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.347628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.347808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.347835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.348034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.348060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.348255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.348281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.348462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.348490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.057 [2024-07-15 19:19:51.348680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.057 [2024-07-15 19:19:51.348708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.057 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.348872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.348906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.349077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.349102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.349292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.349320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 
00:25:11.058 [2024-07-15 19:19:51.349505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.349530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.349745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.349773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.349996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.350022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.350164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.350189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.350403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.350431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.350612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.350640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.350828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.350853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.351036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.351064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.351282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.351310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.351502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.351526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 
00:25:11.058 [2024-07-15 19:19:51.351711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.351739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.351919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.351948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.352107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.352132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.352304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.352329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.352496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.352521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.352665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.352690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.352854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.352892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.353105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.353130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.353323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.353349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.353485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.353511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 
00:25:11.058 [2024-07-15 19:19:51.353673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.353720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.353905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.353932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.354102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.354130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.354344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.354369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.354538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.354563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.354779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.354807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.354970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.354999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.355213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.355238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.355410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.355438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 00:25:11.058 [2024-07-15 19:19:51.355595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.058 [2024-07-15 19:19:51.355620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.058 qpair failed and we were unable to recover it. 
00:25:11.058 [2024-07-15 19:19:51.355787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.058 [2024-07-15 19:19:51.355813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:11.058 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every reconnection attempt logged between 2024-07-15 19:19:51.355787 and 19:19:51.398804 ...]
00:25:11.062 [2024-07-15 19:19:51.398776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.062 [2024-07-15 19:19:51.398804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:11.062 qpair failed and we were unable to recover it.
00:25:11.062 [2024-07-15 19:19:51.398966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.398992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.399174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.399202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.399386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.399414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.399628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.399653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.399835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.399863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.400036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.400061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.400257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.400282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.400431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.400456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.400596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.400642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.400808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.400834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 
00:25:11.062 [2024-07-15 19:19:51.401012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.401038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.401206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.401236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.401428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.401453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.401601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.401626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.401789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.401815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.401968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.401994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.402158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.402186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.402337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.402364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.402530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.402555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.402762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.402790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 
00:25:11.062 [2024-07-15 19:19:51.402995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.403023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.403212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.403238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.403394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.403422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.403600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.062 [2024-07-15 19:19:51.403628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.062 qpair failed and we were unable to recover it. 00:25:11.062 [2024-07-15 19:19:51.403792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.403817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.403959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.403988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.404162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.404192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.404379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.404405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.404576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.404601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.404736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.404762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 
00:25:11.063 [2024-07-15 19:19:51.404902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.404928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.405104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.405131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.405321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.405347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.405512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.405538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.405678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.405704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.405864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.405897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.406063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.406088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.406262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.406290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.406446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.406474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.406654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.406680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 
00:25:11.063 [2024-07-15 19:19:51.406841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.406869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.407035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.407063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.407233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.407258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.407396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.407443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.407597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.407626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.407784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.407809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.408007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.408036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.408197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.408225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.408391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.408416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.408557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.408599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 
00:25:11.063 [2024-07-15 19:19:51.408784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.408812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.408978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.409004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.409178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.409210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.409422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.409450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.409634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.409660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.409874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.409921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.410109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.410137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.410298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.410323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.410506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.410535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.410692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.410720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 
00:25:11.063 [2024-07-15 19:19:51.410904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.063 [2024-07-15 19:19:51.410955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.063 qpair failed and we were unable to recover it. 00:25:11.063 [2024-07-15 19:19:51.411090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.411115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.411312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.411340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.411499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.411524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.411666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.411692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.411888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.411924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.412088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.412113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.412248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.412291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.412472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.412501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.412660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.412685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 
00:25:11.064 [2024-07-15 19:19:51.412853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.412889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.413081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.413109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.413300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.413327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.413542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.413571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.413773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.413800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.413993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.414018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.414182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.414207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.414418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.414446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.414603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.414628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.414813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.414841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 
00:25:11.064 [2024-07-15 19:19:51.415053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.415079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.415247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.415272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.415463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.415491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.415686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.415711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.415857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.415888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.416082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.416108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.416269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.416297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.416460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.416485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.416669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.416697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.416885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.416913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 
00:25:11.064 [2024-07-15 19:19:51.417070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.417095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.417281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.417309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.417490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.417518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.417747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.417777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.417976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.418006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.064 [2024-07-15 19:19:51.418185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.064 [2024-07-15 19:19:51.418213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.064 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.418400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.418425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.418615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.418642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.418802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.418830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.418995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.419021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 
00:25:11.065 [2024-07-15 19:19:51.419160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.419202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.419356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.419384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.419552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.419577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.419785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.419812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.419967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.419996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.420161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.420188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.420360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.420385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.420553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.420581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.420771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.420799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.420985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.421011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 
00:25:11.065 [2024-07-15 19:19:51.421161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.421186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.421354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.421379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.421565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.421593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.421745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.421773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.421960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.421986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.422149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.422177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.422375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.422401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.422569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.422595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.422756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.422785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.422967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.422996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 
00:25:11.065 [2024-07-15 19:19:51.423178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.423208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.423369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.423398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.423580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.423608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.423769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.423795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.423960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.423986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.424144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.424173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.424369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.424395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.424561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.424590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.424751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.424780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 00:25:11.065 [2024-07-15 19:19:51.424994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.065 [2024-07-15 19:19:51.425021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.065 qpair failed and we were unable to recover it. 
00:25:11.065 [2024-07-15 19:19:51.425168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.065 [2024-07-15 19:19:51.425193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:11.065 qpair failed and we were unable to recover it.
00:25:11.067 [2024-07-15 19:19:51.440072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.067 [2024-07-15 19:19:51.440118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:11.067 qpair failed and we were unable to recover it.
00:25:11.351 [2024-07-15 19:19:51.454071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.351 [2024-07-15 19:19:51.454115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:11.351 qpair failed and we were unable to recover it.
00:25:11.352 [2024-07-15 19:19:51.470012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.353 [2024-07-15 19:19:51.470038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:11.353 qpair failed and we were unable to recover it.
00:25:11.353 [2024-07-15 19:19:51.470216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.470244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.470397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.470425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.470611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.470636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.470827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.470855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.471078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.471103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.471251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.471276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.471459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.471487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.471668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.471696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.471889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.471915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.472096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.472128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 
00:25:11.353 [2024-07-15 19:19:51.472347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.472375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.472567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.472591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.472778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.472805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.472985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.473014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.473263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.473288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.473516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.473544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.473709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.473737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.473960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.473986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.474178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.474205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.474361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.474390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 
00:25:11.353 [2024-07-15 19:19:51.474601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.474626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.474850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.474884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.475077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.475105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.475306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.475331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.475512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.475540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.475686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.475714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.475872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.475903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.476107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.476132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.476357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.476384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.476565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.476589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 
00:25:11.353 [2024-07-15 19:19:51.476750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.476779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.476972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.477001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.477220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.477245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.477457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.477485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.477632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.477659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.477845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.477870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.478089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.478117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.478297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.478326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.478537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.353 [2024-07-15 19:19:51.478562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.353 qpair failed and we were unable to recover it. 00:25:11.353 [2024-07-15 19:19:51.478769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.478797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 
00:25:11.354 [2024-07-15 19:19:51.479019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.479048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.479214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.479240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.479372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.479414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.479593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.479621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.479811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.479837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.480024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.480050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.480263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.480291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.480458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.480483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.480623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.480668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.480854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.480902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 
00:25:11.354 [2024-07-15 19:19:51.481098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.481123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.481316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.481344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.481520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.481547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.481710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.481735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.481900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.481948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.482142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.482170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.482326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.482352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.482521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.482547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.482710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.482738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.482925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.482951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 
00:25:11.354 [2024-07-15 19:19:51.483120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.483160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.483367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.483395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.483586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.483611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.483803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.483831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.484024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.484053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.484213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.484240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.484444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.484472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.484660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.484687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.484869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.484904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.485058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.485083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 
00:25:11.354 [2024-07-15 19:19:51.485246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.485274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.485428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.485454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.485639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.485667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.485862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.485899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.486115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.486140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.354 [2024-07-15 19:19:51.486302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.354 [2024-07-15 19:19:51.486330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.354 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.486543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.486571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.486729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.486754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.486945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.486974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.487135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.487164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 
00:25:11.355 [2024-07-15 19:19:51.487353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.487378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.487555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.487580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.487755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.487780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.487944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.487970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.488140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.488165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.488421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.488449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.488668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.488693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.488911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.488941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.489094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.489123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.489316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.489346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 
00:25:11.355 [2024-07-15 19:19:51.489609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.489637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.489826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.489851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.490015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.490041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.490254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.490282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.490500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.490525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.490696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.490722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.490911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.490941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.491160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.491188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.491436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.491461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.491645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.491673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 
00:25:11.355 [2024-07-15 19:19:51.491856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.491892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.492073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.492098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.492285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.492314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.492548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.492573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.492745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.492770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.492914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.492939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.493116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.493141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.493337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.493362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.493563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.493590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.493744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.493771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 
00:25:11.355 [2024-07-15 19:19:51.493960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.493986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.355 qpair failed and we were unable to recover it. 00:25:11.355 [2024-07-15 19:19:51.494172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.355 [2024-07-15 19:19:51.494200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.494382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.494409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.494602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.494627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.494767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.494792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.495053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.495082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.495282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.495308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.495496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.495524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.495677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.495705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.495908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.495938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 
00:25:11.356 [2024-07-15 19:19:51.496164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.496192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.496349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.496377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.496568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.496593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.496759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.496784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.496952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.496977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.497169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.497194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.497361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.497391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.497603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.497631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.497841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.497866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.498030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.498062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 
00:25:11.356 [2024-07-15 19:19:51.498227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.498255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.498506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.498531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.498723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.498752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.498967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.498996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.499160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.499185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.499330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.499356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.499506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.499531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.499695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.499720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.499909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.499951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 00:25:11.356 [2024-07-15 19:19:51.500132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.356 [2024-07-15 19:19:51.500174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.356 qpair failed and we were unable to recover it. 
00:25:11.361 [2024-07-15 19:19:51.543094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.543122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.543304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.543329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.543544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.543572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.543797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.543824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.544018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.544044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.544261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.544288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.544499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.544527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.544746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.544771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.544933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.544961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 00:25:11.361 [2024-07-15 19:19:51.545141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.361 [2024-07-15 19:19:51.545168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.361 qpair failed and we were unable to recover it. 
00:25:11.361 [2024-07-15 19:19:51.545358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.545383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.545542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.545569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.545790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.545817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.545987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.546013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.546186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.546211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.546404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.546433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.546625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.546651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.546839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.546867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.547032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.547057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.547230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.547255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 
00:25:11.362 [2024-07-15 19:19:51.547417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.547441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.547630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.547658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.547843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.547867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.548009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.548051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.548244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.548272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.548467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.548492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.548704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.548732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.548983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.549012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.549202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.549227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.549444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.549476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 
00:25:11.362 [2024-07-15 19:19:51.549628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.549656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.549849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.549873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.550045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.550073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.550261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.550290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.550455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.550480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.550673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.550700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.550888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.550916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.551073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.551098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.551261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.551289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.551475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.551503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 
00:25:11.362 [2024-07-15 19:19:51.551691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.551716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.551930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.551959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.552169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.552197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.552419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.552444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.552635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.552663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.552899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.552926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.553090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.553116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.553286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.553314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.553527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.553555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.553710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.553735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 
00:25:11.362 [2024-07-15 19:19:51.553885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.362 [2024-07-15 19:19:51.553911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.362 qpair failed and we were unable to recover it. 00:25:11.362 [2024-07-15 19:19:51.554061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.554088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.554261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.554286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.554473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.554501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.554679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.554707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.554871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.554903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.555096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.555124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.555287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.555316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.555495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.555520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.555692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.555717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 
00:25:11.363 [2024-07-15 19:19:51.555860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.555892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.556062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.556087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.556296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.556324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.556517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.556542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.556735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.556760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.556942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.556970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.557130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.557158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.557366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.557391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.557605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.557633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.557782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.557814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 
00:25:11.363 [2024-07-15 19:19:51.558006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.558033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.558203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.558231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.558410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.558437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.558599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.558624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.558796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.558821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.558973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.558999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.559147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.559172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.559325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.559353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.559535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.559564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.559725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.559750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 
00:25:11.363 [2024-07-15 19:19:51.559927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.559956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.560178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.560203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.560346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.560371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.560516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.560542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.560737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.560765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.560930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.363 [2024-07-15 19:19:51.560956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.363 qpair failed and we were unable to recover it. 00:25:11.363 [2024-07-15 19:19:51.561126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.561151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.561414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.561442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.561605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.561631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.561846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.561874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 
00:25:11.364 [2024-07-15 19:19:51.562095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.562122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.562284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.562310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.562526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.562554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.562765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.562793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.562955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.562981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.563144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.563168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.563337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.563362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.563503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.563528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.563747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.563775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.563993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.564021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 
00:25:11.364 [2024-07-15 19:19:51.564233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.564258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.564481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.564509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.564673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.564702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.564888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.564932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.565098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.565123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.565315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.565343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.565533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.565558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.565723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.565748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.565935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.565963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.566159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.566184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 
00:25:11.364 [2024-07-15 19:19:51.566334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.566359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.566499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.566541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.566706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.566731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.566922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.566951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.567107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.567137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.567325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.567350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.567537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.567565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.567782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.567809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.567973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.567998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.568175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.568200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 
00:25:11.364 [2024-07-15 19:19:51.568361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.568386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.568580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.568604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.568785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.568810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.569033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.569063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.569246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.569270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.569406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.569448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.569632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.364 [2024-07-15 19:19:51.569659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.364 qpair failed and we were unable to recover it. 00:25:11.364 [2024-07-15 19:19:51.569824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.569848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.570051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.570077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.570277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.570306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 
00:25:11.365 [2024-07-15 19:19:51.570488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.570513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.570699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.570728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.570909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.570938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.571120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.571146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.571357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.571385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.571585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.571610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.571783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.571812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.571976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.572005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.572190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.572218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.572383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.572409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 
00:25:11.365 [2024-07-15 19:19:51.572622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.572651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.572840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.572868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.573083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.573108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.573270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.573299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.573475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.573503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.573690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.573715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.573916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.573945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.574093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.574121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.574312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.574337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.574551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.574579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 
00:25:11.365 [2024-07-15 19:19:51.574769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.574797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.575013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.575039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.575228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.575256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.575445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.575471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.575665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.575689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.575838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.575863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.576066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.576092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.576262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.576288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.576430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.576471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.576631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.576658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 
00:25:11.365 [2024-07-15 19:19:51.576868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.576902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.577094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.577119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.577291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.577315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.577459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.577484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.577651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.577679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.577868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.577902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.578092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.578117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.365 [2024-07-15 19:19:51.578325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.365 [2024-07-15 19:19:51.578353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.365 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.578536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.578563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.578734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.578759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 
00:25:11.366 [2024-07-15 19:19:51.578894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.578938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.579120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.579149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.579336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.579361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.579550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.579577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.579766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.579793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.580009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.580035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.580217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.580249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.580405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.580433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.580617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.580643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.580813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.580838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 
00:25:11.366 [2024-07-15 19:19:51.581042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.581068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.581245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.581271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.581444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.581469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.581657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.581684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.581881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.581906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.582072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.582097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.582289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.582317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.582506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.582531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.582688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.582715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.582905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.582934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 
00:25:11.366 [2024-07-15 19:19:51.583100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.583126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.583290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.583332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.583484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.583511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.583727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.583752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.583911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.583939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.584148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.584176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.584340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.584365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.584590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.584618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.584835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.584863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.585063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.585088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 
00:25:11.366 [2024-07-15 19:19:51.585270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.585297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.585493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.585517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.585692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.585717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.585943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.585969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.586115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.586140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.586352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.586377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.586594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.586620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.586770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.586796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.366 qpair failed and we were unable to recover it. 00:25:11.366 [2024-07-15 19:19:51.586966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.366 [2024-07-15 19:19:51.586992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.587158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.587183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 
00:25:11.367 [2024-07-15 19:19:51.587352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.587378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.587548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.587573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.587772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.587797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.587994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.588023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.588187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.588213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.588352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.588377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.588570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.588602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.588767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.588793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.588977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.589006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.589220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.589248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 
00:25:11.367 [2024-07-15 19:19:51.589407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.589433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.589615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.589643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.589828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.589856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.590059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.590084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.590278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.590308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.590499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.590527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.590720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.590745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.590941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.590971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.591147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.591174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.591398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.591422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 
00:25:11.367 [2024-07-15 19:19:51.591617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.591645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.591825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.591853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.592026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.592051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.592200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.592225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.592407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.592434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.592615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.592640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.592860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.592898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.593055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.593084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.593268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.593293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.593444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.593473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 
00:25:11.367 [2024-07-15 19:19:51.593630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.593657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.593818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.593843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.594041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.594066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.594232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.367 [2024-07-15 19:19:51.594260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.367 qpair failed and we were unable to recover it. 00:25:11.367 [2024-07-15 19:19:51.594426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.594452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.594651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.594678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.594827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.594856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.595060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.595086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.595276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.595304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.595464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.595491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 
00:25:11.368 [2024-07-15 19:19:51.595662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.595686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.595857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.595890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.596071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.596097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.596274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.596299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.596456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.596483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.596693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.596721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.596911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.596941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.597087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.597112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.597326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.597355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.597508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.597535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 
00:25:11.368 [2024-07-15 19:19:51.597748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.597776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.597933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.597961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.598175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.598200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.598391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.598419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.598595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.598623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.598817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.598842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.599030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.599056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.599243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.599271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.599428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.599453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.599626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.599651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 
00:25:11.368 [2024-07-15 19:19:51.599852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.599882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.600022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.600047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.600191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.600232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.600442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.600470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.600624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.600649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.600823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.600848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.601037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.601066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.601261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.601286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.601455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.601479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.601675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.601702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 
00:25:11.368 [2024-07-15 19:19:51.601933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.601959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.602124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.602149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.602339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.602368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.602560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.602586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.602747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.368 [2024-07-15 19:19:51.602774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.368 qpair failed and we were unable to recover it. 00:25:11.368 [2024-07-15 19:19:51.602956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.602985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.603147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.603172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.603351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.603378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.603568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.603596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.603812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.603837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 
00:25:11.369 [2024-07-15 19:19:51.603985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.604012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.604196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.604222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.604364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.604389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.604574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.604603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.604790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.604818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.604980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.605006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.605188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.605220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.605401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.605429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.605613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.605638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.605854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.605889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 
00:25:11.369 [2024-07-15 19:19:51.606107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.606135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.606331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.606356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.606547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.606575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.606782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.606810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.607001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.607027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.607228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.607257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.607438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.607466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.607656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.607681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.607903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.607931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.608126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.608152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 
00:25:11.369 [2024-07-15 19:19:51.608325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.608351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.608573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.608601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.608786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.608814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.608982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.609008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.609178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.609203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.609411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.609438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.609628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.609653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.609803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.609828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.609974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.609999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.610143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.610169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 
00:25:11.369 [2024-07-15 19:19:51.610318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.610343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.610535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.610560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.610827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.610855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.611060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.611087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.611298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.611326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.369 [2024-07-15 19:19:51.611545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.369 [2024-07-15 19:19:51.611570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.369 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.611733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.611762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.611923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.611952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.612137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.612162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.612338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.612366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 
00:25:11.370 [2024-07-15 19:19:51.612548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.612577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.612733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.612760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.612906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.612949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.613166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.613191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.613331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.613357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.613572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.613600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.613783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.613815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.614004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.614030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.614222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.614250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.614453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.614481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 
00:25:11.370 [2024-07-15 19:19:51.614665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.614690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.614901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.614930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.615090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.615118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.615286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.615310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.615483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.615508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.615699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.615726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.615894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.615922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.616066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.616091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.616283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.616311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.616473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.616498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 
00:25:11.370 [2024-07-15 19:19:51.616686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.616714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.616924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.616950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.617088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.617114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.617301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.617329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.617491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.617518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.617699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.617724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.617921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.617950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.618102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.618131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.618327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.618352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.618537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.618565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 
00:25:11.370 [2024-07-15 19:19:51.618720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.618749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.618942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.618968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.619112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.619137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.619334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.619362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.619518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.619543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.619736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.619763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.370 qpair failed and we were unable to recover it. 00:25:11.370 [2024-07-15 19:19:51.619952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.370 [2024-07-15 19:19:51.619982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.620174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.620200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.620358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.620386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.620567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.620595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 
00:25:11.371 [2024-07-15 19:19:51.620812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.620837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.621010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.621036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.621183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.621207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.621377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.621402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.621568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.621595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.621750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.621778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.622000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.622030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.622177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.622203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.622372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.622398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.622589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.622614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 
00:25:11.371 [2024-07-15 19:19:51.622802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.622830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.623025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.623054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.623212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.623236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.623417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.623445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.623629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.623659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.623824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.623849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.623992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.624018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.624152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.624177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.624344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.624369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.624589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.624616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 
00:25:11.371 [2024-07-15 19:19:51.624805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.624834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.625000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.625026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.625181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.625211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.625364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.625392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.625583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.625609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.625767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.625794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.625970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.625998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.626172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.626198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.626367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.626391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.626564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.626592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 
00:25:11.371 [2024-07-15 19:19:51.626785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.626811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.626999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.627028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.627219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.627244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.627441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.627466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.627658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.627686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.627867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.627904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.628075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.371 [2024-07-15 19:19:51.628101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.371 qpair failed and we were unable to recover it. 00:25:11.371 [2024-07-15 19:19:51.628281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.628309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.628473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.628500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.628660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.628685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 
00:25:11.372 [2024-07-15 19:19:51.628902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.628930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.629116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.629144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.629308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.629333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.629474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.629517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.629710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.629735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.629907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.629932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.630127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.630159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.630319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.630347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.630560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.630585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.630736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.630764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 
00:25:11.372 [2024-07-15 19:19:51.630976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.631002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.631194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.631220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.631404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.631431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.631650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.631675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.631820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.631845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.631995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.632021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.632160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.632185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.632350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.632376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.632534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.632562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.632752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.632780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 
00:25:11.372 [2024-07-15 19:19:51.632952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.632983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.633158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.633202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.633367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.633399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.633589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.633619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.633814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.633841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.634014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.634043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.634213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.634238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.634376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.634419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.634617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.634645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.634859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.634893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 
00:25:11.372 [2024-07-15 19:19:51.635112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.635140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.635293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.635321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.372 [2024-07-15 19:19:51.635486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.372 [2024-07-15 19:19:51.635510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.372 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.635701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.635729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.635889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.635917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.636106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.636131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.636282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.636307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.636446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.636471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.636664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.636689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.636887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.636916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 
00:25:11.373 [2024-07-15 19:19:51.637127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.637154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.637319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.637345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.637530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.637559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.637712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.637740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.637925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.637951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.638164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.638192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.638379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.638411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.638604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.638629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.638814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.638842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.639048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.639075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 
00:25:11.373 [2024-07-15 19:19:51.639216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.639241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.639407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.639432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.639641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.639669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.639858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.639889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.640080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.640109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.640270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.640299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.640484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.640509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.640673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.640701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.640887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.640916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 00:25:11.373 [2024-07-15 19:19:51.641110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.373 [2024-07-15 19:19:51.641136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.373 qpair failed and we were unable to recover it. 
00:25:11.373 [2024-07-15 19:19:51.641325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.373 [2024-07-15 19:19:51.641353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:11.373 qpair failed and we were unable to recover it.
00:25:11.373 [... identical connect() failed (errno = 111) and sock connection error messages for tqpair=0x7f96ac000b90 (addr=10.0.0.2, port=4420) repeat for every retry from 19:19:51.641565 through 19:19:51.685598, each ending with "qpair failed and we were unable to recover it." ...]
00:25:11.378 [2024-07-15 19:19:51.685790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:11.378 [2024-07-15 19:19:51.685815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:11.378 qpair failed and we were unable to recover it.
00:25:11.378 [2024-07-15 19:19:51.685964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.378 [2024-07-15 19:19:51.685992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.378 qpair failed and we were unable to recover it. 00:25:11.378 [2024-07-15 19:19:51.686176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.378 [2024-07-15 19:19:51.686204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.378 qpair failed and we were unable to recover it. 00:25:11.378 [2024-07-15 19:19:51.686394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.378 [2024-07-15 19:19:51.686419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.686605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.686637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.686797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.686825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.687040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.687066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.687253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.687280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.687492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.687519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.687706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.687734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.687959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.687986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 
00:25:11.379 [2024-07-15 19:19:51.688129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.688170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.688382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.688407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.688625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.688653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.688883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.688912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.689107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.689132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.689320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.689348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.689506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.689534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.689721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.689747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.689937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.689965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.690126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.690154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 
00:25:11.379 [2024-07-15 19:19:51.690340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.690366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.690549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.690577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.690761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.690791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.690981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.691007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.691200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.691228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.691441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.691469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.691626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.691651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.691835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.691863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.692071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.692097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.692239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.692265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 
00:25:11.379 [2024-07-15 19:19:51.692485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.692513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.692703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.692728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.692911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.692937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.693154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.693182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.693337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.693365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.693547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.693572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.693763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.693790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.694000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.694029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.694238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.694264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.694484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.694512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 
00:25:11.379 [2024-07-15 19:19:51.694728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.694753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.694919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.694945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.695167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.695195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.379 qpair failed and we were unable to recover it. 00:25:11.379 [2024-07-15 19:19:51.695377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.379 [2024-07-15 19:19:51.695410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.695582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.695607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.695791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.695816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.695961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.695987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.696154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.696179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.696391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.696418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.696599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.696627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 
00:25:11.380 [2024-07-15 19:19:51.696833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.696861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.697080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.697105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.697289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.697315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.697511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.697536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.697724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.697753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.697975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.698004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.698175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.698200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.698414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.698442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.698626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.698654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.698868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.698899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 
00:25:11.380 [2024-07-15 19:19:51.699094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.699122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.699281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.699310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.699476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.699501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.699641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.699684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.699887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.699913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.700084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.700109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.700299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.700327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.700542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.700571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.700755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.700780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.700942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.700970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 
00:25:11.380 [2024-07-15 19:19:51.701198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.701226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.701395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.701420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.701612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.701640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.701849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.701873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.702048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.702074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.702275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.702301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.702480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.702508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.702677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.702703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.702905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.380 [2024-07-15 19:19:51.702934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.380 qpair failed and we were unable to recover it. 00:25:11.380 [2024-07-15 19:19:51.703119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.703147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 
00:25:11.381 [2024-07-15 19:19:51.703326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.703351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.703571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.703599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.703780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.703808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.703970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.704003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.704190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.704219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.704396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.704423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.704605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.704630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.704815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.704843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.705009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.705037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.705215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.705240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 
00:25:11.381 [2024-07-15 19:19:51.705429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.705457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.705636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.705664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.705963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.705989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.706196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.706223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.706384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.706414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.706631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.706657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.706874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.706923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.707079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.707104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.707277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.707303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.707474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.707499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 
00:25:11.381 [2024-07-15 19:19:51.707665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.707694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.707910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.707936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.708153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.708181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.708363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.708390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.708594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.708619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.708851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.708885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.709052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.709080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.709247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.709272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.709416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.709461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.709648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.709676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 
00:25:11.381 [2024-07-15 19:19:51.709898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.709924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.710080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.710109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.710289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.710317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.710485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.710510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.710692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.710721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.710867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.710914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.711097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.711122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.711271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.711299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.711511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.711539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 00:25:11.381 [2024-07-15 19:19:51.711726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.381 [2024-07-15 19:19:51.711751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.381 qpair failed and we were unable to recover it. 
00:25:11.381 [2024-07-15 19:19:51.711911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.711940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.712100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.712128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.712338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.712363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.712551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.712585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.712773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.712801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.712966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.712992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.713172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.713200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.713396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.713421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.713594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.713619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.713830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.713858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 
00:25:11.382 [2024-07-15 19:19:51.714044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.714071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.714265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.714290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.714495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.714523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.714671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.714699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.714866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.714907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.715124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.715152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.715307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.715335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.715527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.715552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.715723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.715747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.715938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.715967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 
00:25:11.382 [2024-07-15 19:19:51.716159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.716184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.716371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.716399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.716555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.716583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.716771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.716796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.716984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.717013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.717192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.717219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.717409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.717434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.717630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.717658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.717840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.717869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.718045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.718070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 
00:25:11.382 [2024-07-15 19:19:51.718219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.718260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.718425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.718453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.718610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.718635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.718809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.718837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.719052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.719081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.719269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.719295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.719435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.719460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.719631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.719656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.719792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.719817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.382 [2024-07-15 19:19:51.719998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.720027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 
00:25:11.382 [2024-07-15 19:19:51.720209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.382 [2024-07-15 19:19:51.720236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.382 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.720392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.720417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.720602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.720630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.720790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.720823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.721007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.721032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.721222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.721250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.721458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.721487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.721700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.721725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.721932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.721960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.722148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.722174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 
00:25:11.383 [2024-07-15 19:19:51.722341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.722366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.722552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.722580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.722740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.722767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.722924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.722950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.723121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.723146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.723302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.723327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.723467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.723492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.723665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.723691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.723905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.723934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.724125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.724151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 
00:25:11.383 [2024-07-15 19:19:51.724337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.724365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.724542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.724570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.724754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.724779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.724948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.724974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.725120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.725146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.725291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.725316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.725505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.725532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.725695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.725722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.725907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.725933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.726125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.726155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 
00:25:11.383 [2024-07-15 19:19:51.726382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.726410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.726571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.726597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.726770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.726795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.726964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.726990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.727156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.727181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.727396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.727424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.727623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.727648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.727815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.727841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.728020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.728046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.728238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.728266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 
00:25:11.383 [2024-07-15 19:19:51.728458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.728485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.383 [2024-07-15 19:19:51.728676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.383 [2024-07-15 19:19:51.728704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.383 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.728874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.728910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.729098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.729129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.729327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.729355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.729545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.729573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.729744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.729769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.729933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.729959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.730175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.730203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.730392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.730418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 
00:25:11.384 [2024-07-15 19:19:51.730577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.730605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.730771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.730798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.731014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.731040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.731229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.731257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.731433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.731462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.731623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.731648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.731865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.731900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.732090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.732118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.732297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.732322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.732507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.732535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 
00:25:11.384 [2024-07-15 19:19:51.732718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.732746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.732916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.732942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.733109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.733134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.733279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.733304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.733474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.733501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.733721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.733749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.733933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.733962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.734162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.734188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.734353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.734383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.734594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.734622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 
00:25:11.384 [2024-07-15 19:19:51.734845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.734874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.735052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.735077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.735288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.735315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.735499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.735524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.735713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.735740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.735900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.735931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.736125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.736150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.384 [2024-07-15 19:19:51.736309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.384 [2024-07-15 19:19:51.736336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.384 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.736545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.736573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.736749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.736775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 
00:25:11.385 [2024-07-15 19:19:51.736962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.736992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.737188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.737214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.737375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.737401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.737590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.737623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.737813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.737842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.738073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.738098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.738264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.738292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.738469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.738497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.738651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.738676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.738861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.738897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 
00:25:11.385 [2024-07-15 19:19:51.739073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.739101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.739271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.739295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.739506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.739534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.739688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.739716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.739907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.739933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.740153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.740181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.740331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.740358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.740540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.740566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.740738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.740763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.740984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.741013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 
00:25:11.385 [2024-07-15 19:19:51.741206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.741232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.741411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.741439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.741615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.741643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.741804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.741829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.742028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.742054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.742248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.742276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.742492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.742517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.742700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.742728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.742939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.742968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.743154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.743180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 
00:25:11.385 [2024-07-15 19:19:51.743398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.743426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.743575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.743603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.743822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.743848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.744025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.744051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.744192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.744217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.744356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.744382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.744566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.744594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.744775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.744803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.385 qpair failed and we were unable to recover it. 00:25:11.385 [2024-07-15 19:19:51.744999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.385 [2024-07-15 19:19:51.745034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.745230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.745258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 
00:25:11.386 [2024-07-15 19:19:51.745468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.745497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.745667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.745692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.745829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.745854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.746028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.746064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.746279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.746304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.746519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.746547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.746759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.746787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.746972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.746998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.747182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.747209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.747376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.747404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 
00:25:11.386 [2024-07-15 19:19:51.747563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.747588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.747779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.747807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.747996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.748025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.748243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.748268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.748457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.748485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.748697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.748725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.749003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.749029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.749234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.749263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.749450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.749478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.749675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.749700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 
00:25:11.386 [2024-07-15 19:19:51.749888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.749931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.750101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.750126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.750295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.750320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.750463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.750488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.750658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.750685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.750868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.750908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.751131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.751159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.751321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.751349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.751534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.751559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.751744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.751773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 
00:25:11.386 [2024-07-15 19:19:51.751962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.751991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.752176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.752201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.752423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.752451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.752665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.752692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.752851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.752883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.753053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.753081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.753297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.753324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.753491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.753516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.753703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.753731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.386 qpair failed and we were unable to recover it. 00:25:11.386 [2024-07-15 19:19:51.753926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.386 [2024-07-15 19:19:51.753953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 
00:25:11.387 [2024-07-15 19:19:51.754122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.754147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.754282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.754307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.754475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.754501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.754643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.754672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.754891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.754919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.755105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.755131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.755301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.755327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.755542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.755570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.755726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.755754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.755941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.755967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 
00:25:11.387 [2024-07-15 19:19:51.756147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.756175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.756354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.756382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.756571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.756596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.756813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.756840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.757067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.757095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.757310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.757335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.757492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.757519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.757713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.757741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.757933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.757959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.758177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.758205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 
00:25:11.387 [2024-07-15 19:19:51.758397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.758422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.758617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.758641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.758805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.758832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.759020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.759050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.759265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.759291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.759480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.759508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.759697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.759721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.759929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.759955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.387 [2024-07-15 19:19:51.760122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.387 [2024-07-15 19:19:51.760163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.387 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.760345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.760376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 
00:25:11.673 [2024-07-15 19:19:51.760570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.760598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.760820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.760849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.761043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.761071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.761267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.761292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.761438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.761463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.761646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.761673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.761835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.761860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.762013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.762038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.762210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.762235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.762400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.762426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 
00:25:11.673 [2024-07-15 19:19:51.762627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.762654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.762852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.762896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.763060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.763087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.763276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.763309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.673 [2024-07-15 19:19:51.763525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.673 [2024-07-15 19:19:51.763550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.673 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.763718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.763743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.763927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.763956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.764141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.764169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.764364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.764389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.764606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.764633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 
00:25:11.674 [2024-07-15 19:19:51.764781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.764808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.764972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.764998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.765193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.765218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.765406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.765435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.765628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.765653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.765788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.765813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.765984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.766013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.766199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.766224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.766409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.766438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.766591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.766619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 
00:25:11.674 [2024-07-15 19:19:51.766811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.766836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.767012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.767038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.767229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.767258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.767456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.767481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.767675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.767703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.767890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.767935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.768115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.768140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.768303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.768332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.768544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.768573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.768762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.768787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 
00:25:11.674 [2024-07-15 19:19:51.769007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.769037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.769226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.769254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.769422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.769447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.769640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.769669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.769856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.769890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.770078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.770104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.770278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.770303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.770474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.770498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.770693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.770717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.770915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.770943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 
00:25:11.674 [2024-07-15 19:19:51.771126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.771154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.771363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.771388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.771602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.771631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.771788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.771820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.772007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.772033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.674 qpair failed and we were unable to recover it. 00:25:11.674 [2024-07-15 19:19:51.772220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.674 [2024-07-15 19:19:51.772249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.772414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.772442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.772609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.772635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.772849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.772885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.773066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.773094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 
00:25:11.675 [2024-07-15 19:19:51.773261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.773286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.773447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.773476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.773630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.773659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.773836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.773864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.774062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.774088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.774277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.774305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.774470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.774496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.774689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.774717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.774902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.774945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.775094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.775119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 
00:25:11.675 [2024-07-15 19:19:51.775288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.775313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.775479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.775507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.775700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.775725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.775935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.775964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.776146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.776174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.776361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.776386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.776543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.776570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.776721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.776749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.776939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.776965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.777153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.777182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 
00:25:11.675 [2024-07-15 19:19:51.777348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.777376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.777563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.777589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.777771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.777800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.777987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.778017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.778205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.778230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.778427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.778455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.778675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.778703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.778895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.778921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.779137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.779166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.779352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.779377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 
00:25:11.675 [2024-07-15 19:19:51.779570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.779595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.779788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.779816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.780007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.780035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.780201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.780226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.780443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.780471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.675 qpair failed and we were unable to recover it. 00:25:11.675 [2024-07-15 19:19:51.780654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.675 [2024-07-15 19:19:51.780681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.780868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.780899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.781072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.781098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.781266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.781291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.781470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.781495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 
00:25:11.676 [2024-07-15 19:19:51.781719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.781747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.781941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.781970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.782164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.782189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.782359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.782384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.782600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.782625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.782822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.782848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.783031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.783057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.783255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.783282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.783445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.783472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.783668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.783693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 
00:25:11.676 [2024-07-15 19:19:51.783917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.783958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.784107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.784132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.784321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.784349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.784536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.784565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.784758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.784784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.784944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.784970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.785148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.785176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.785364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.785391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.785533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.785559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.785725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.785752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 
00:25:11.676 [2024-07-15 19:19:51.785897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.785927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.786114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.786142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.786351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.786379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.786570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.786595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.786814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.786842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.787035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.787064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.787248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.787273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.787459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.787487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.787674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.787702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.787914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.787940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 
00:25:11.676 [2024-07-15 19:19:51.788131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.788159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.788344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.788372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.788536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.788562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.788746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.788774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.788932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.788961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.789156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.789181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.676 qpair failed and we were unable to recover it. 00:25:11.676 [2024-07-15 19:19:51.789347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.676 [2024-07-15 19:19:51.789372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.789561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.789588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.789762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.789787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.790004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.790032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 
00:25:11.677 [2024-07-15 19:19:51.790239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.790267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.790457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.790482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.790643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.790671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.790824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.790851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.791047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.791073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.791218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.791243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.791425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.791452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.791667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.791692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.791852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.791922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.792095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.792121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 
00:25:11.677 [2024-07-15 19:19:51.792291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.792318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.792507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.792535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.792687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.792716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.792934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.792960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.793148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.793177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.793386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.793414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.793574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.793599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.793790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.793818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.794005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.794034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.794190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.794215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 
00:25:11.677 [2024-07-15 19:19:51.794415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.794447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.794640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.794668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.794850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.794881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.795083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.795110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.795268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.795297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.795460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.795486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.795659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.795684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.795852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.795883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.796047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.677 [2024-07-15 19:19:51.796072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.677 qpair failed and we were unable to recover it. 00:25:11.677 [2024-07-15 19:19:51.796257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.796285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 
00:25:11.678 [2024-07-15 19:19:51.796451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.796479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.796693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.796718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.796908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.796936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.797149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.797177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.797339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.797364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.797498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.797539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.797700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.797727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.797948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.797973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.798136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.798177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.798361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.798389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 
00:25:11.678 [2024-07-15 19:19:51.798581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.798605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.798789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.798817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.799030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.799056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.799225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.799251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.799442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.799471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.799666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.799694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.799891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.799916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.800085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.800115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.800306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.800334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.800517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.800542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 
00:25:11.678 [2024-07-15 19:19:51.800734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.800762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.800949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.800978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.801136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.801161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.801371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.801399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.801616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.801644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.801867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.801898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.802122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.802150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.802307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.802335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.802521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.802546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.802736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.802764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 
00:25:11.678 [2024-07-15 19:19:51.802920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.802953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.803143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.803168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.803385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.803413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.803630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.803655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.803790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.803817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.804005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.804034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.804216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.804244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.804409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.804434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.804606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.804631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 00:25:11.678 [2024-07-15 19:19:51.804798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.678 [2024-07-15 19:19:51.804823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.678 qpair failed and we were unable to recover it. 
00:25:11.679 [2024-07-15 19:19:51.805012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.805038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.805226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.805254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.805469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.805494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.805693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.805718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.805907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.805935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.806120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.806149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.806332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.806357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.806538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.806566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.806735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.806779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.806958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.806985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 
00:25:11.679 [2024-07-15 19:19:51.807198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.807227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.807505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.807554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.807778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.807804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.807988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.808017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.808204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.808232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.808414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.808440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.808665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.808693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.808890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.808916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.809112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.809137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.809357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.809385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 
00:25:11.679 [2024-07-15 19:19:51.809691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.809744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.809970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.809995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.810180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.810208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.810546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.810597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.810758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.810784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.811007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.811036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.811238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.811266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.811430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.811456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.811628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.811654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.811819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.811847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 
00:25:11.679 [2024-07-15 19:19:51.812044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.812075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.812269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.812297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.812576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.812625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.812854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.812888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.813070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.813099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.813276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.813304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.813519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.813544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.813776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.813825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.814050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.814076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 00:25:11.679 [2024-07-15 19:19:51.814246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.679 [2024-07-15 19:19:51.814272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.679 qpair failed and we were unable to recover it. 
00:25:11.680 [2024-07-15 19:19:51.814470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.814498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.814656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.814685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.814873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.814906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.815098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.815126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.815277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.815305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.815496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.815521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.815713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.815741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.815911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.815939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.816122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.816148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.816361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.816388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 
00:25:11.680 [2024-07-15 19:19:51.816636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.816685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.816890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.816916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.817061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.817086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.817281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.817309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.817495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.817520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.817739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.817766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.817933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.817963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.818184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.818209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.818374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.818403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.818741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.818794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 
00:25:11.680 [2024-07-15 19:19:51.818983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.819009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.819231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.819259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.819549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.819605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.819788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.819816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.820011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.820037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.820254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.820282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.820458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.820483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.820671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.820699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.820861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.820898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.821081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.821106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 
00:25:11.680 [2024-07-15 19:19:51.821251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.821280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.821449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.821475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.821676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.821701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.821894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.821923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.822142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.822170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.822383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.822409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.822600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.822628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.822824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.822852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.823030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.823055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.680 [2024-07-15 19:19:51.823255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.823282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 
00:25:11.680 [2024-07-15 19:19:51.823544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.680 [2024-07-15 19:19:51.823599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.680 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.823758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.823783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.823936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.823980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.824158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.824187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.824385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.824411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.824709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.824765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.824962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.824990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.825162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.825187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.825355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.825398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 00:25:11.681 [2024-07-15 19:19:51.825686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.681 [2024-07-15 19:19:51.825752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.681 qpair failed and we were unable to recover it. 
[The same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats unchanged, with timestamps running through 2024-07-15 19:19:51.868000.]
00:25:11.686 [2024-07-15 19:19:51.868156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.868185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.868379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.868404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.868591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.868618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.868770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.868798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.868989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.869016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.869212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.869240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.869460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.869485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.869657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.869682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.869871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.869906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.870097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.870125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 
00:25:11.686 [2024-07-15 19:19:51.870335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.870361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.870570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.870606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.870820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.870848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.871056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.871083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.871251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.871277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.871450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.686 [2024-07-15 19:19:51.871475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.686 qpair failed and we were unable to recover it. 00:25:11.686 [2024-07-15 19:19:51.871642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.871667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.871859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.871895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.872091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.872116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.872261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.872285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 
00:25:11.687 [2024-07-15 19:19:51.872471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.872504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.872657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.872686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.872874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.872907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.873085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.873113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.873296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.873324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.873482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.873507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.873686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.873713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.873905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.873931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.874068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.874093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.874309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.874336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 
00:25:11.687 [2024-07-15 19:19:51.874530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.874555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.874727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.874753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.874941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.874969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.875128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.875156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.875378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.875403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.875563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.875591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.875811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.875836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.876043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.876069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.876239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.876267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.876493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.876519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 
00:25:11.687 [2024-07-15 19:19:51.876686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.876711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.876905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.876933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.877151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.877179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.877367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.877392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.877576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.877603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.877760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.877788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.878002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.878029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.878248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.878293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.878501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.878531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.878719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.878744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 
00:25:11.687 [2024-07-15 19:19:51.878916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.878944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.879113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.879138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.879274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.879299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.879521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.879571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.879764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.879792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.879954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.879981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.880178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.687 [2024-07-15 19:19:51.880206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.687 qpair failed and we were unable to recover it. 00:25:11.687 [2024-07-15 19:19:51.880366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.880393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.880605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.880631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.880840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.880869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 
00:25:11.688 [2024-07-15 19:19:51.881100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.881125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.881366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.881391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.881687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.881756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.881958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.881983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.882156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.882181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.882342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.882369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.882551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.882580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.882768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.882794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.882945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.882970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.883135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.883176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 
00:25:11.688 [2024-07-15 19:19:51.883369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.883394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.883670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.883727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.884031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.884057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.884227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.884252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.884396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.884426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.884644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.884672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.884857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.884888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.885082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.885108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.885274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.885302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.885496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.885520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 
00:25:11.688 [2024-07-15 19:19:51.885707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.885735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.885955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.885981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.886116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.886141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.886279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.886322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.886501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.886529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.886702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.886727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.886885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.886913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.887099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.887126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.887323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.887349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.887620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.887649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 
00:25:11.688 [2024-07-15 19:19:51.887819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.887847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.888040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.888065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.888231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.888257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.888392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.888417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.888582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.888607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.888796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.888824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.888984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.688 [2024-07-15 19:19:51.889010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.688 qpair failed and we were unable to recover it. 00:25:11.688 [2024-07-15 19:19:51.889176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.889201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.889482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.889539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.889747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.889776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 
00:25:11.689 [2024-07-15 19:19:51.890004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.890030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.890199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.890227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.890451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.890476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.890651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.890677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.890840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.890867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.891066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.891092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.891257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.891282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.891590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.891640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.891850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.891893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.892082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.892108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 
00:25:11.689 [2024-07-15 19:19:51.892314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.892342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.892560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.892585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.892779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.892807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.892999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.893024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.893238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.893266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.893453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.893491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.893690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.893717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.893906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.893937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.894109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.894134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.894359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.894387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 
00:25:11.689 [2024-07-15 19:19:51.894533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.894562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.894780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.894805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.894995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.895023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.895235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.895260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.895429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.895453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.895618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.895646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.895826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.895854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.896051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.896076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.896263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.896290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.896445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.896473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 
00:25:11.689 [2024-07-15 19:19:51.896666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.896691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.896858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.896890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.897117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.897145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.897313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.897338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.897526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.897554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.897735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.897763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.897956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.897982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.898171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.898199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.898355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.689 [2024-07-15 19:19:51.898383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.689 qpair failed and we were unable to recover it. 00:25:11.689 [2024-07-15 19:19:51.898554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.898578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 
00:25:11.690 [2024-07-15 19:19:51.898716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.898759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.898910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.898944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.899113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.899142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.899314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.899339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.899535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.899563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.899756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.899781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.899922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.899948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.900145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.900172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.900360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.900385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.900552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.900579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 
00:25:11.690 [2024-07-15 19:19:51.900766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.900793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.900976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.901001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.901212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.901240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.901401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.901429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.901625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.901650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.901785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.901810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.901985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.902010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.902173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.902198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.902385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.902413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.902596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.902621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 
00:25:11.690 [2024-07-15 19:19:51.902751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.902777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.902995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.903023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.903211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.903239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.903427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.903452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.903638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.903666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.903864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.903894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.904098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.904124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.904307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.904335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.904522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.904550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.904743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.904768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 
00:25:11.690 [2024-07-15 19:19:51.904986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.905015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.905171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.905199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.905386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.905411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.905595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.905622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.905815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.905844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.690 [2024-07-15 19:19:51.906045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.690 [2024-07-15 19:19:51.906072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.690 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.906230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.906258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.906427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.906455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.906645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.906670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.906812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.906838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 
00:25:11.691 [2024-07-15 19:19:51.907022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.907048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.907287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.907312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.907502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.907529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.907751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.907783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.907979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.908007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.908201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.908229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.908439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.908467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.908632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.908657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.908800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.908825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.909028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.909057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 
00:25:11.691 [2024-07-15 19:19:51.909234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.909260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.909449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.909478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.909661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.909689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.909872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.909905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.910113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.910155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.910350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.910378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.910564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.910589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.910756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.910784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.911017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.911043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.911189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.911214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 
00:25:11.691 [2024-07-15 19:19:51.911388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.911416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.911601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.911629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.911824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.911849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.912018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.912044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.912197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.912225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.912413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.912438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.912618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.912646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.912836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.912864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.913063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.913089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.913286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.913314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 
00:25:11.691 [2024-07-15 19:19:51.913495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.913527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.913735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.913760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.913933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.913961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.914185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.914210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.691 [2024-07-15 19:19:51.914365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.691 [2024-07-15 19:19:51.914391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.691 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.914574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.914602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.914811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.914838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.915062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.915088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.915249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.915277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.915465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.915490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 
00:25:11.692 [2024-07-15 19:19:51.915660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.915687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.915900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.915929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.916125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.916154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.916335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.916360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.916552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.916580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.916731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.916758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.916947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.916973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.917133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.917162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.917358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.917383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.917549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.917574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 
00:25:11.692 [2024-07-15 19:19:51.917737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.917762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.917945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.917974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.918159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.918185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.918372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.918402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.918617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.918646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.918859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.918906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.919067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.919093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.919290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.919326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.919534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.919559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.919758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.919783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 
00:25:11.692 [2024-07-15 19:19:51.919952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.919978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.920145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.920170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.920305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.920347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.920543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.920569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.920745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.920770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.920925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.920954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.921132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.921161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.921355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.921380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.921574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.921599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.921793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.921822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 
00:25:11.692 [2024-07-15 19:19:51.922015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.922041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.922193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.922226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.922413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.922441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.922643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.922668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.922856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.922890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.923085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.692 [2024-07-15 19:19:51.923113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.692 qpair failed and we were unable to recover it. 00:25:11.692 [2024-07-15 19:19:51.923300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.923325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.923491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.923522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.923701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.923729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.923923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.923949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 
00:25:11.693 [2024-07-15 19:19:51.924097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.924122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.924258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.924291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.924436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.924462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.924604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.924629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.924826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.924850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.925072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.925098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.925298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.925326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.925513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.925541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.925729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.925759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.925945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.925974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 
00:25:11.693 [2024-07-15 19:19:51.926162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.926188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.926319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.926344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.926514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.926539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.926698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.926727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.926905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.926942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.927136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.927178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.927398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.927427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.927623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.927649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.927839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.927872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.928070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.928098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 
00:25:11.693 [2024-07-15 19:19:51.928309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.928335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.928530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.928559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.928751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.928779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.928965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.928991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.929130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.929156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.929292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.929318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.929518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.929543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.929732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.929760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.929914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.929943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.930114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.930141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 
00:25:11.693 [2024-07-15 19:19:51.930337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.930365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.930521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.930550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.930720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.930745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.930918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.930961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.931153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.931179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.931371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.931396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.931615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.931640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.693 qpair failed and we were unable to recover it. 00:25:11.693 [2024-07-15 19:19:51.931789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.693 [2024-07-15 19:19:51.931814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.931984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.932011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.932154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.932180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 
00:25:11.694 [2024-07-15 19:19:51.932351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.932376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.932574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.932599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.932793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.932823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.932989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.933018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.933235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.933260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.933416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.933444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.933612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.933640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.933801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.933827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.934008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.934035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.934251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.934280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 
00:25:11.694 [2024-07-15 19:19:51.934490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.934516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.934717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.934745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.934952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.934979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.935152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.935177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.935372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.935401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.935555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.935582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.935766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.935793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.935962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.935991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.936189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.936217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.936371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.936401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 
00:25:11.694 [2024-07-15 19:19:51.936540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.936583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.936782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.936809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.936986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.937013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.937205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.937234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.937445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.937474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.937644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.937671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.937809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.937838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.938018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.938048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.938246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.938272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.938499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.938524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 
00:25:11.694 [2024-07-15 19:19:51.938700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.938726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.938894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.938921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.939065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.939091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.939293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.939321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.939536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.939562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.939759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.939787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.939956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.939985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.940148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.940173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.940345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.694 [2024-07-15 19:19:51.940370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.694 qpair failed and we were unable to recover it. 00:25:11.694 [2024-07-15 19:19:51.940518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.940543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 
00:25:11.695 [2024-07-15 19:19:51.940707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.940741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.940912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.940949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.941108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.941136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.941324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.941349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.941561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.941589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.941772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.941812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.941984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.942009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.942194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.942222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.942386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.942414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.942570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.942596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 
00:25:11.695 [2024-07-15 19:19:51.942731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.942773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.942943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.942972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.943142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.943168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.943340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.943365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.943592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.943620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.943834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.943862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.944032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.944058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.944248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.944276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.944465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.944491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.944694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.944722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 
00:25:11.695 [2024-07-15 19:19:51.944972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.945012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.945168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.945196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.945388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.945417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.945673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.945721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.945886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.945912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.946082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.946110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.946327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.946355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.946548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.946573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.946777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.946805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.946968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.946997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 
00:25:11.695 [2024-07-15 19:19:51.947167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.947197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.947368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.947393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.947585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.947612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.947779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.947804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.695 qpair failed and we were unable to recover it. 00:25:11.695 [2024-07-15 19:19:51.948001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.695 [2024-07-15 19:19:51.948030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.948193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.948221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.948438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.948464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.948681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.948709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.948900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.948935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.949101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.949129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 
00:25:11.696 [2024-07-15 19:19:51.949283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.949311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.949527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.949573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.949756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.949781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.949962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.949992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.950148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.950177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.950340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.950367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.950588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.950616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.950763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.950796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.950961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.950987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.951205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.951234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 
00:25:11.696 [2024-07-15 19:19:51.951458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.951506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.951707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.951732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.951940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.951966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.952159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.952187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.952342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.952367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.952589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.952618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.952806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.952834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.953004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.953032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.953188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.953223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.953396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.953424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 
00:25:11.696 [2024-07-15 19:19:51.953605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.953631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.953794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.953824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.954000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.954026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.954170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.954196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.954369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.954395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.954604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.954650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.954841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.954866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.955054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.955080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.955274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.955302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.955467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.955493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 
00:25:11.696 [2024-07-15 19:19:51.955660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.955685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.955873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.955921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.956099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.956124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.956316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.956344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.956496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.956529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.696 [2024-07-15 19:19:51.956716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.696 [2024-07-15 19:19:51.956742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.696 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.956888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.956914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.957050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.957075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.957259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.957284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.957428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.957453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 
00:25:11.697 [2024-07-15 19:19:51.957586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.957624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.957819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.957844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.958020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.958046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.958286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.958314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.958538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.958563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.958728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.958757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.958960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.958986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.959128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.959153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.959318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.959346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.959553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.959599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 
00:25:11.697 [2024-07-15 19:19:51.959780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.959806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.959998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.960026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.960188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.960216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.960372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.960397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.960590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.960618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.960801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.960829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.961009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.961034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.961178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.961221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.961447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.961498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.961662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.961703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 
00:25:11.697 [2024-07-15 19:19:51.961884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.961939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.962073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.962099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.962298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.962323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.962518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.962548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.962757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.962785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.962955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.962981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.963129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.963160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.963361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.963387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.963583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.963608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.963808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.963835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 
00:25:11.697 [2024-07-15 19:19:51.964014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.964043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.964228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.964254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.964447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.964484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.964672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.964699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.964888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.964922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.965085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.965118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.965323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.697 [2024-07-15 19:19:51.965352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.697 qpair failed and we were unable to recover it. 00:25:11.697 [2024-07-15 19:19:51.965542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.965567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.965729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.965757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.965934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.965962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 
00:25:11.698 [2024-07-15 19:19:51.966154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.966179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.966360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.966388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.966592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.966618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.966787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.966812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.966994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.967023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.967181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.967210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.967393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.967418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.967579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.967608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.967764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.967793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.967979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.968005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 
00:25:11.698 [2024-07-15 19:19:51.968172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.968200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.968353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.968381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.968576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.968603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.968794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.968822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.968973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.969001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.969171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.969196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.969360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.969388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.969552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.969582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.969779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.969804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.969991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.970020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 
00:25:11.698 [2024-07-15 19:19:51.970200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.970228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.970434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.970459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.970642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.970670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.970829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.970859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.971084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.971110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.971319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.971348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.971527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.971555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.971802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.971829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.972061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.972088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.972277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.972305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 
00:25:11.698 [2024-07-15 19:19:51.972496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.972521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.972757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.972806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.973001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.973028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.973173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.973198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.973382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.973410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.973616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.973644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.973818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.973846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.974025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.974051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.698 [2024-07-15 19:19:51.974216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.698 [2024-07-15 19:19:51.974244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.698 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.974458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.974483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 
00:25:11.699 [2024-07-15 19:19:51.974638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.974665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.974856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.974894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.975087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.975113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.975323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.975351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.975528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.975556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.975734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.975759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.975945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.975975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.976172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.976198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.976392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.976417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.976623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.976651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 
00:25:11.699 [2024-07-15 19:19:51.976850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.976884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.977054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.977079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.977248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.977276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.977445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.977473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.977625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.977650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.977839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.977867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.978033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.978062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.978250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.978275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.978476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.978502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.978673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.978701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 
00:25:11.699 [2024-07-15 19:19:51.978883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.978929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.979123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.979174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.979352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.979380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.979565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.979601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.979792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.979821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.980016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.980044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.980253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.980279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.980448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.980477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.980684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.980712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.980888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.980914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 
00:25:11.699 [2024-07-15 19:19:51.981061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.981086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.981256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.981282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.981473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.981499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.699 [2024-07-15 19:19:51.981681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.699 [2024-07-15 19:19:51.981709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.699 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.981901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.981930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.982094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.982120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.982314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.982342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.982508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.982536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.982763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.982789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.982993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.983022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 
00:25:11.700 [2024-07-15 19:19:51.983179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.983208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.983374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.983400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.983537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.983562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.983786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.983814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.983996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.984022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.984212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.984240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.984424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.984453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.984607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.984632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.984817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.984846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.985066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.985095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 
00:25:11.700 [2024-07-15 19:19:51.985252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.985278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.985419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.985460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.985655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.985680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.985874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.985905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.986126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.986154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.986344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.986372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.986584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.986609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.986792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.986820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.987038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.987064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.987271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.987297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 
00:25:11.700 [2024-07-15 19:19:51.987490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.987518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.987730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.987758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.987947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.987973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.988115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.988140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.988344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.988376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.988567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.988592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.988779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.988807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.988965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.988993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.989159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.989185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.989375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.989403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 
00:25:11.700 [2024-07-15 19:19:51.989559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.989587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.989766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.989791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.989941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.989967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.990151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.700 [2024-07-15 19:19:51.990179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.700 qpair failed and we were unable to recover it. 00:25:11.700 [2024-07-15 19:19:51.990340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.990366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.990575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.990603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.990779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.990807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.990995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.991021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.991208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.991236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.991437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.991462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 
00:25:11.701 [2024-07-15 19:19:51.991600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.991625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.991812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.991842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.992052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.992078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.992242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.992268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.992434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.992459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.992633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.992661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.992844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.992870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.993093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.993121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.993270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.993298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.993519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.993545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 
00:25:11.701 [2024-07-15 19:19:51.993742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.993770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.993933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.993967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.994132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.994157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.994332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.994375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.994560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.994589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.994765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.994807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.994992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.995018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.995189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.995214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.995380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.995405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.995576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.995601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 
00:25:11.701 [2024-07-15 19:19:51.995799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.995826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.996015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.996041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.996257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.996285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.996490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.996516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.996658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.996685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.996889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.996918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.997084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.997112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.997296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.997321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.997470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.997498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.997690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.997718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 
00:25:11.701 [2024-07-15 19:19:51.997914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.997950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.998151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.998192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.998343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.998371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.998544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.998569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.998717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.998742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.701 [2024-07-15 19:19:51.998930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.701 [2024-07-15 19:19:51.998960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.701 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:51.999135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:51.999160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:51.999370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:51.999398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:51.999575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:51.999603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:51.999776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:51.999801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 
00:25:11.702 [2024-07-15 19:19:51.999946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:51.999972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.000164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.000193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.000351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.000377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.000548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.000575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.000749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.000775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.000944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.000970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.001184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.001214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.001404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.001433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.001631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.001656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.001872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.001910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 
00:25:11.702 [2024-07-15 19:19:52.002120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.002159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.002367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.002392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.002607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.002641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.002812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.002841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.003034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.003059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.003280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.003308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.003493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.003521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.003734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.003759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.003928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.003956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.004121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.004149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 
00:25:11.702 [2024-07-15 19:19:52.004334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.004360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.004591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.004619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.004780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.004810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.004972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.004998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.005168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.005194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.005330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.005373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.005569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.005594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.005780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.005807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.006027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.006056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.006246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.006271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 
00:25:11.702 [2024-07-15 19:19:52.006477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.006505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.006707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.006735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.006923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.006949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.007138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.007166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.007318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.007346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.007533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.007558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.007740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.702 [2024-07-15 19:19:52.007768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.702 qpair failed and we were unable to recover it. 00:25:11.702 [2024-07-15 19:19:52.007945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.007974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.008168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.008193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.008356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.008385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 
00:25:11.703 [2024-07-15 19:19:52.008576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.008601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.008777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.008804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.008995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.009024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.009172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.009200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.009381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.009406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.009557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.009585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.009768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.009796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.010012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.010038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.010260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.010289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.010473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.010498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 
00:25:11.703 [2024-07-15 19:19:52.010661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.010686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.010883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.010911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.011063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.011091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.011266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.011292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.011479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.011507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.011662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.011690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.011853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.011886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.012073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.012102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.012295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.012320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.012488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.012514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 
00:25:11.703 [2024-07-15 19:19:52.012708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.012735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.012899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.012928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.013125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.013150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.013376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.013404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.013615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.013643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.013816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.013844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.014065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.014091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.014319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.014347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.014520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.014545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.014731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.014759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 
00:25:11.703 [2024-07-15 19:19:52.014950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.014979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.015179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.015204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.703 qpair failed and we were unable to recover it. 00:25:11.703 [2024-07-15 19:19:52.015414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.703 [2024-07-15 19:19:52.015442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.015658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.015686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.015874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.015905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.016092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.016120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.016325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.016353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.016548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.016573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.016743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.016768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.016996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.017024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 
00:25:11.704 [2024-07-15 19:19:52.017192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.017221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.017409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.017437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.017618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.017646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.017808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.017834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.018011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.018037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.018199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.018227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.018409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.018434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.018605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.018632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.018818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.018847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.019036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.019061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 
00:25:11.704 [2024-07-15 19:19:52.019225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.019255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.019440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.019469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.019687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.019713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.019897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.019926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.020087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.020115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.020322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.020348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.020562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.020590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.020777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.020805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.021004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.021030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.021203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.021227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 
00:25:11.704 [2024-07-15 19:19:52.021394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.021419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.021581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.021607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.021831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.021859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.022089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.022117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.022301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.022326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.022514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.022543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.022698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.022726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.022959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.022989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.023174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.023202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.023392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.023421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 
00:25:11.704 [2024-07-15 19:19:52.023612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.023637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.023822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.023850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.704 [2024-07-15 19:19:52.024040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.704 [2024-07-15 19:19:52.024069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.704 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.024279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.024305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.024517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.024545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.024690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.024718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.024909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.024935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.025097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.025126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.025303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.025331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.025516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.025541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 
00:25:11.705 [2024-07-15 19:19:52.025727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.025755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.025986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.026012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.026186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.026211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.026406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.026431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.026600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.026625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.026792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.026817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.027009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.027037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.027246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.027274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.027466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.027492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.027684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.027711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 
00:25:11.705 [2024-07-15 19:19:52.027901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.027930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.028094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.028120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.028260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.028286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.028436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.028461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.028625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.028650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.028824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.028849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.029025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.029051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.029195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.029220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.029388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.029414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.029595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.029623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 
00:25:11.705 [2024-07-15 19:19:52.029807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.029833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.029991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.030017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.030170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.030195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.030389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.030414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.030611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.030639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.030823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.030851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.031020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.031046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.031216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.031242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.031400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.031432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.031593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.031619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 
00:25:11.705 [2024-07-15 19:19:52.031807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.031835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.031999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.032028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.032215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.032240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.032405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.705 [2024-07-15 19:19:52.032434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.705 qpair failed and we were unable to recover it. 00:25:11.705 [2024-07-15 19:19:52.032615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.032643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.032828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.032854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.033006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.033032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.033245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.033273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.033453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.033478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.033697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.033725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 
00:25:11.706 [2024-07-15 19:19:52.033911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.033953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.034121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.034146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.034322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.034348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.034565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.034593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.034775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.034800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.034990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.035020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.035208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.035236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.035429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.035454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.035618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.035643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.035839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.035865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 
00:25:11.706 [2024-07-15 19:19:52.036036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.036061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.036219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.036247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.036415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.036440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.036615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.036641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.036792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.036820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.037028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.037057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.037221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.037248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.037431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.037460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.037666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.037694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.037892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.037918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 
00:25:11.706 [2024-07-15 19:19:52.038104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.038132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.038341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.038370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.038548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.038573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.038738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.038764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.038931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.038960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.039142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.039168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.039379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.039407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.039555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.039583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.039734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.039759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.039943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.039972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 
00:25:11.706 [2024-07-15 19:19:52.040184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.040212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.040402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.040428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.040609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.040637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.040789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.040817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.041003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.041029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.706 [2024-07-15 19:19:52.041196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.706 [2024-07-15 19:19:52.041224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.706 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.041407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.041435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.041594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.041620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.041762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.041787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.041980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.042006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 
00:25:11.707 [2024-07-15 19:19:52.042215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.042241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.042395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.042423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.042614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.042642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.042829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.042857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.043070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.043095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.043248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.043276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.043464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.043490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.043672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.043700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.043867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.043903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.044085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.044111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 
00:25:11.707 [2024-07-15 19:19:52.044289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.044317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.044503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.044531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.044728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.044753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.044924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.044950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.045140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.045168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.045365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.045390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.045550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.045584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.045732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.045760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.045983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.046008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.046222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.046250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 
00:25:11.707 [2024-07-15 19:19:52.046436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.046464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.046651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.046676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.046906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.046935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.047117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.047145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.047314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.047340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.047520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.047548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.047739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.047764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.047957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.047982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.048169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.048197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.048379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.048407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 
00:25:11.707 [2024-07-15 19:19:52.048573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.048600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.048787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.048815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.048987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.049015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.049232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.049258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.049441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.049469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.049617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.049645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.049835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.707 [2024-07-15 19:19:52.049861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.707 qpair failed and we were unable to recover it. 00:25:11.707 [2024-07-15 19:19:52.050057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.050085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.050265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.050294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.050480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.050505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 
00:25:11.708 [2024-07-15 19:19:52.050648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.050673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.050819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.050860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.051033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.051060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.051250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.051278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.051467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.051495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.051684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.051710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.051867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.051900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.052125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.052154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.052322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.052347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.052502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.052527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 
00:25:11.708 [2024-07-15 19:19:52.052681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.052710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.052896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.052922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.053071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.053097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.053265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.053291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.053463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.053488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.053673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.053701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.053866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.053927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.054104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.054130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.054297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.054322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.054508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.054536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 
00:25:11.708 [2024-07-15 19:19:52.054746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.054771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.054955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.054984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.055165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.055193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.055385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.055410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.055595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.055623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.055829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.055857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.056040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.056064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.056218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.056247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.056429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.056459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.056642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.056668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 
00:25:11.708 [2024-07-15 19:19:52.056889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.056927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.057121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.708 [2024-07-15 19:19:52.057150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.708 qpair failed and we were unable to recover it. 00:25:11.708 [2024-07-15 19:19:52.057338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.057365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.057562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.057591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.057778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.057807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.057991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.058018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.058233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.058262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.058442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.058471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.058662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.058688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.058871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.058909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 
00:25:11.709 [2024-07-15 19:19:52.059118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.059147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.059332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.059357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.059573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.059602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.059786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.059814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.059997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.060028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.060210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.060239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.060418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.060447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.060633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.060658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.060885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.060929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.061075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.061101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 
00:25:11.709 [2024-07-15 19:19:52.061274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.061300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.061498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.061524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.061756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.061782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.061946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.061973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.062156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.062184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.062401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.062430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.062621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.062647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.062810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.062839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.063016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.063045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.063233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.063259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 
00:25:11.709 [2024-07-15 19:19:52.063451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.063480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.063670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.063696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.063861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.063896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.064075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.064103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.064290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.064319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.064485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.064513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.064693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.064722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.064921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.064948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.065122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.065147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.065331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.065360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 
00:25:11.709 [2024-07-15 19:19:52.065538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.065567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.065781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.065807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.066025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.066055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.709 [2024-07-15 19:19:52.066270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.709 [2024-07-15 19:19:52.066296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.709 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.066465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.066492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.066677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.066706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.066895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.066921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.067100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.067127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.067318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.067347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.067527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.067556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 
00:25:11.710 [2024-07-15 19:19:52.067762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.067788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.067952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.067982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.068171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.068199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.068394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.068420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.068612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.068640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.068814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.068847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.069039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.069066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.069262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.069291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.069478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.069507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.069659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.069686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 
00:25:11.710 [2024-07-15 19:19:52.069900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.069943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.070138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.070164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.070374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.070400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.070589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.070618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.070807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.070836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.071038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.071065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.071283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.071312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.071493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.071522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.071709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.071735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.071914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.071941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 
00:25:11.710 [2024-07-15 19:19:52.072157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.072186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.072408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.072434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.072606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.072632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.072848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.072884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.073116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.073142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.073299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.073328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.073515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.073544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.073735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.073761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.073902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.073928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.074126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.074152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 
00:25:11.710 [2024-07-15 19:19:52.074330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.074356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.074517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.074545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.074755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.074789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.074983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.075009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.075169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.710 [2024-07-15 19:19:52.075198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.710 qpair failed and we were unable to recover it. 00:25:11.710 [2024-07-15 19:19:52.075384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.075413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.075630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.075655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.075827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.075855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.076052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.076081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.076297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.076323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 
00:25:11.711 [2024-07-15 19:19:52.076520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.076549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.076762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.076791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.076993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.077019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.077226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.077255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.077467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.077496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.077689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.711 [2024-07-15 19:19:52.077716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.711 qpair failed and we were unable to recover it. 00:25:11.711 [2024-07-15 19:19:52.077903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.992 [2024-07-15 19:19:52.077933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.992 qpair failed and we were unable to recover it. 00:25:11.992 [2024-07-15 19:19:52.078092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.992 [2024-07-15 19:19:52.078122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.992 qpair failed and we were unable to recover it. 00:25:11.992 [2024-07-15 19:19:52.078314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.992 [2024-07-15 19:19:52.078342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.992 qpair failed and we were unable to recover it. 00:25:11.992 [2024-07-15 19:19:52.078534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.992 [2024-07-15 19:19:52.078564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.992 qpair failed and we were unable to recover it. 
00:25:11.997 [2024-07-15 19:19:52.119645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.119672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.119836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.119887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.120099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.120128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.120316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.120342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.120531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.120562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.120741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.120770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.120958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.120988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.121204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.997 [2024-07-15 19:19:52.121231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.997 qpair failed and we were unable to recover it. 00:25:11.997 [2024-07-15 19:19:52.121417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.121445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.121662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.121688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 
00:25:11.998 [2024-07-15 19:19:52.121874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.121912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.122130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.122156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.122305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.122331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.122472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.122498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.122722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.122750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.122945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.122973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.123155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.123184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.123367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.123396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.123586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.123612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.123801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.123830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 
00:25:11.998 [2024-07-15 19:19:52.124050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.124076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.124265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.124294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.124449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.124478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.124691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.124718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.124908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.124938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.125121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.125149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.125342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.125371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.125551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.125578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.125798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.125826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.126013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.126043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 
00:25:11.998 [2024-07-15 19:19:52.126236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.126266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.126463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.126489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.126651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.126680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.126859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.126897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.127059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.127087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.127252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.127278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.127469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.127498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.127711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.127743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.127936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.127966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.128157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.128183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 
00:25:11.998 [2024-07-15 19:19:52.128366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.128395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.128577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.128606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.128781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.128810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.128973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.129000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.129142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.129168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.129381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.129409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.129595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.129623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.129840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.129866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.130080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.998 [2024-07-15 19:19:52.130106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.998 qpair failed and we were unable to recover it. 00:25:11.998 [2024-07-15 19:19:52.130318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.130346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 
00:25:11.999 [2024-07-15 19:19:52.130529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.130557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.130742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.130768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.130965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.130995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.131184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.131213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.131403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.131432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.131617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.131644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.131833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.131862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.132093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.132119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.132288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.132314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.132483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.132510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 
00:25:11.999 [2024-07-15 19:19:52.132698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.132727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.132964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.132993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.133180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.133209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.133398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.133423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.133603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.133632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.133827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.133856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.134057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.134087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.134281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.134308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.134472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.134501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.134712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.134742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 
00:25:11.999 [2024-07-15 19:19:52.134921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.134950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.135140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.135166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.135362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.135391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.135573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.135602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.135789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.135817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.136004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.136030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.136226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.136252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.136441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.136471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.136611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.136645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.136830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.136859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 
00:25:11.999 [2024-07-15 19:19:52.137080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.137106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.137301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.137327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.137519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.137545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.137742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.137771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.137965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.137992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.138154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.138180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.138325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.138351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.138547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.138573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.138748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.138777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:11.999 [2024-07-15 19:19:52.138966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.138994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 
00:25:11.999 [2024-07-15 19:19:52.139163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:11.999 [2024-07-15 19:19:52.139189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:11.999 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.139381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.139409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.139625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.139654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.139819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.139845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.140015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.140042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.140230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.140259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.140470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.140496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.140687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.140714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.140909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.140936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.141133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.141176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 
00:25:12.000 [2024-07-15 19:19:52.141364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.141393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.141609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.141635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.141802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.141828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.141968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.141995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.142133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.142177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.142338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.142369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.142563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.142589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.142779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.142808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.143001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.143028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.143205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.143231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 
00:25:12.000 [2024-07-15 19:19:52.143392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.143418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.143676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.143705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.143863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.143902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.144098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.144124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.144298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.144324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.144518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.144547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.144732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.144761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.144934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.144962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.145103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.145130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.145306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.145332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 
00:25:12.000 [2024-07-15 19:19:52.145524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.145553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.145720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.145746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.145940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.145967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.146134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.146176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.146364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.146391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.146551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.146578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.146772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.146798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.146988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.147014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.147176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.147220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.147413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.147440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 
00:25:12.000 [2024-07-15 19:19:52.147584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.147625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.147838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.000 [2024-07-15 19:19:52.147867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.000 qpair failed and we were unable to recover it. 00:25:12.000 [2024-07-15 19:19:52.148064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.148090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.148262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.148288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.148478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.148504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.148644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.148671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.148889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.148918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.149105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.149132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.149325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.149351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.149707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.149756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 
00:25:12.001 [2024-07-15 19:19:52.149951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.149978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.150148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.150174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.150375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.150401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.150621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.150650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.150832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.150862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.151063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.151089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.151287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.151317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.151495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.151524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.151745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.151771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 00:25:12.001 [2024-07-15 19:19:52.151919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.001 [2024-07-15 19:19:52.151947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.001 qpair failed and we were unable to recover it. 
00:25:12.006 [2024-07-15 19:19:52.192797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.192823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.192999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.193026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.193222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.193248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.193421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.193450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.193641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.193669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.193826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.193853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.193894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc200e0 (9): Bad file descriptor 00:25:12.006 [2024-07-15 19:19:52.194112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.194151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.194372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.194401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.194627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.194653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 
00:25:12.006 [2024-07-15 19:19:52.194892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.194938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.195107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.195133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.195300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.195326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.195517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.195546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.195757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.195785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.195976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.196003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.196195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.196221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.196443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.196472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.196671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.196697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.196851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.196887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 
00:25:12.006 [2024-07-15 19:19:52.197078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.197104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.197304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.197330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.197681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.197730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.197940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.197969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.198139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.198166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.198363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.198389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.198555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.198584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.198750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.198777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.198972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.006 [2024-07-15 19:19:52.198999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.006 qpair failed and we were unable to recover it. 00:25:12.006 [2024-07-15 19:19:52.199181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.199210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 
00:25:12.007 [2024-07-15 19:19:52.199370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.199398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.199632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.199682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.199848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.199885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.200082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.200107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.200297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.200326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.200480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.200510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.200679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.200705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.200930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.200977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.201156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.201198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.201389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.201415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 
00:25:12.007 [2024-07-15 19:19:52.201604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.201658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.201844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.201873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.202052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.202078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.202290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.202319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.202499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.202527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.202685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.202711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.202946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.202972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.203141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.203167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.203370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.203396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.203716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.203775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 
00:25:12.007 [2024-07-15 19:19:52.203980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.204006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.204199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.204225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.204416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.204445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.204632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.204660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.204828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.204853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.205055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.205081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.205272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.205301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.205493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.205520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.205661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.205704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.205908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.205952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 
00:25:12.007 [2024-07-15 19:19:52.206159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.206185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.206413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.206441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.206600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.206629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.206821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.206847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.207052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.207079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.207297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.207326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.207506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.207532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.207718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.207746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.207913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.207942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 00:25:12.007 [2024-07-15 19:19:52.208130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.007 [2024-07-15 19:19:52.208157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.007 qpair failed and we were unable to recover it. 
00:25:12.007 [2024-07-15 19:19:52.208337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.208366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.208575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.208605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.208803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.208829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.208994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.209020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.209206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.209235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.209424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.209450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.209642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.209673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.209857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.209905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.210101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.210127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.210297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.210323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 
00:25:12.008 [2024-07-15 19:19:52.210487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.210515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.210683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.210709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.210849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.210875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.211084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.211127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.211322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.211348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.211545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.211574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.211794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.211822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.212046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.212073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.212265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.212294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.212476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.212505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 
00:25:12.008 [2024-07-15 19:19:52.212724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.212750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.212948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.212975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.213117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.213144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.213354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.213380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.213605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.213634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.213851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.213887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.214087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.214112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.214303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.214331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.214488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.214518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.214713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.214739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 
00:25:12.008 [2024-07-15 19:19:52.214924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.214954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.215171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.215197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.215389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.215416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.215599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.215628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.215813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.215847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.216070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.216097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.216293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.216323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.216477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.216505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.216692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.216718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.216939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.216968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 
00:25:12.008 [2024-07-15 19:19:52.217186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.008 [2024-07-15 19:19:52.217215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.008 qpair failed and we were unable to recover it. 00:25:12.008 [2024-07-15 19:19:52.217405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.217432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.217625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.217653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.217839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.217868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.218078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.218105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.218281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.218310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.218521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.218550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.218713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.218739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.218906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.218936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.219122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.219151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 
00:25:12.009 [2024-07-15 19:19:52.219322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.219348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.219512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.219538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.219727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.219755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.219949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.219976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.220192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.220221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.220400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.220429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.220623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.220649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.220819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.220847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.221051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.221081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.221280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.221306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 
00:25:12.009 [2024-07-15 19:19:52.221505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.221531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.221736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.221766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.221948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.221976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.222138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.222167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.222382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.222408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.222575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.222601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.222744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.222770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.222912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.222939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.223108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.223134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.223326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.223355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 
00:25:12.009 [2024-07-15 19:19:52.223565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.223593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.223787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.223814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.224000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.224029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.224250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.224279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.009 qpair failed and we were unable to recover it. 00:25:12.009 [2024-07-15 19:19:52.224442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.009 [2024-07-15 19:19:52.224473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.010 qpair failed and we were unable to recover it. 00:25:12.010 [2024-07-15 19:19:52.224689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.010 [2024-07-15 19:19:52.224718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.010 qpair failed and we were unable to recover it. 00:25:12.010 [2024-07-15 19:19:52.224911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.010 [2024-07-15 19:19:52.224938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.010 qpair failed and we were unable to recover it. 00:25:12.010 [2024-07-15 19:19:52.225087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.010 [2024-07-15 19:19:52.225113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.010 qpair failed and we were unable to recover it. 00:25:12.010 [2024-07-15 19:19:52.225296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.010 [2024-07-15 19:19:52.225325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.010 qpair failed and we were unable to recover it. 00:25:12.010 [2024-07-15 19:19:52.225486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.010 [2024-07-15 19:19:52.225516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.010 qpair failed and we were unable to recover it. 
00:25:12.017 [2024-07-15 19:19:52.269063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.269092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.017 [2024-07-15 19:19:52.269247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.269274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.017 [2024-07-15 19:19:52.269445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.269471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.017 [2024-07-15 19:19:52.269641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.269667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.017 [2024-07-15 19:19:52.269806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.269833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.017 [2024-07-15 19:19:52.270045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.270072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.017 [2024-07-15 19:19:52.270269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.270298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.017 [2024-07-15 19:19:52.270517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.017 [2024-07-15 19:19:52.270543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.017 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.270746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.270775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.270980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.271008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 
00:25:12.018 [2024-07-15 19:19:52.271180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.271206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.271369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.271400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.271582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.271611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.271806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.271833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.272006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.272033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.272218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.272247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.272437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.272464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.272686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.272714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.272924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.272954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.273117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.273144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 
00:25:12.018 [2024-07-15 19:19:52.273332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.273362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.273553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.273583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.273781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.273807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.273996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.274025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.274215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.274244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.274471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.274497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.274693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.274721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.274916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.274943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.275112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.275138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.275306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.275333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 
00:25:12.018 [2024-07-15 19:19:52.275566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.275592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.275787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.275813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.276005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.276036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.276243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.276272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.276457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.018 [2024-07-15 19:19:52.276487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.018 qpair failed and we were unable to recover it. 00:25:12.018 [2024-07-15 19:19:52.276703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.276732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.276914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.276944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.277131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.277158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.277350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.277380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.277568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.277598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 
00:25:12.019 [2024-07-15 19:19:52.277793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.277819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.278038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.278067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.278256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.278286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.278504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.278529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.278679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.278708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.278892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.278936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.279109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.279135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.279365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.279391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.279572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.279598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.279768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.279794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 
00:25:12.019 [2024-07-15 19:19:52.279990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.280019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.280208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.280237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.280455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.280481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.280670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.280699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.280886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.280915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.281079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.281105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.281298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.281324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.281494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.281524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.281708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.281734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.281872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.281922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 
00:25:12.019 [2024-07-15 19:19:52.282145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.282173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.282372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.282398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.282588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.282617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.282798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.282827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.283033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.283060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.283257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.283286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.019 [2024-07-15 19:19:52.283445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.019 [2024-07-15 19:19:52.283474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.019 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.283637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.283664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.283855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.283894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.284096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.284125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 
00:25:12.020 [2024-07-15 19:19:52.284296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.284322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.284510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.284536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.284699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.284728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.284914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.284941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.285160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.285194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.285360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.285390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.285603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.285630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.285826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.285855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.286029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.286059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.286223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.286249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 
00:25:12.020 [2024-07-15 19:19:52.286442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.286468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.286675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.286704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.286885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.286915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.287102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.287128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.287263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.287307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.287498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.287524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.287739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.287768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.287938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.287969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.288168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.288195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.288375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.288401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 
00:25:12.020 [2024-07-15 19:19:52.288569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.288595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.288787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.288813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.289035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.289065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.289231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.289261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.289479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.289505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.289692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.289721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.289909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.289940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.290160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.290186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.290337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.290366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.290553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.290581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 
00:25:12.020 [2024-07-15 19:19:52.290744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.290770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.020 [2024-07-15 19:19:52.290991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.020 [2024-07-15 19:19:52.291020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.020 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.291234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.291263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.291449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.291476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.291666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.291694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.291838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.291867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.292093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.292120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.292318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.292347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.292499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.292529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.292744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.292770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 
00:25:12.021 [2024-07-15 19:19:52.292918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.292947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.293161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.293190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.293360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.293386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.293557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.293583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.293778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.293811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.293989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.294017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.294214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.294243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.294454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.294483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.294675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.294702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.294890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.294920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 
00:25:12.021 [2024-07-15 19:19:52.295107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.295136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.295301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.295327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.295541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.295570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.295763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.295789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.295960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.295987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.296151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.296194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.296384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.296412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.296579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.296605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.296783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.296809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 00:25:12.021 [2024-07-15 19:19:52.296980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.021 [2024-07-15 19:19:52.297007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.021 qpair failed and we were unable to recover it. 
00:25:12.021 [2024-07-15 19:19:52.297146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.021 [2024-07-15 19:19:52.297173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:12.021 qpair failed and we were unable to recover it.
00:25:12.029 [identical error triplet repeated for every subsequent reconnect attempt from 19:19:52.297 through 19:19:52.342: posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reported a sock connection error for tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420, and each qpair failed and could not be recovered]
00:25:12.029 [2024-07-15 19:19:52.343070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.029 [2024-07-15 19:19:52.343096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.029 qpair failed and we were unable to recover it. 00:25:12.029 [2024-07-15 19:19:52.343290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.029 [2024-07-15 19:19:52.343323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.029 qpair failed and we were unable to recover it. 00:25:12.029 [2024-07-15 19:19:52.343551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.343577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.343742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.343769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.343961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.343991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.344180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.344209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.344374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.344401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.344617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.344646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.344797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.344825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.345004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.345031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 
00:25:12.030 [2024-07-15 19:19:52.345217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.345246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.345459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.345488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.345657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.345683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.345830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.345856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.346044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.346073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.346248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.346274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.346445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.346471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.346628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.346657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.346850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.346882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.347107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.347136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 
00:25:12.030 [2024-07-15 19:19:52.347323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.347353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.347567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.347593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.347791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.347820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.348035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.348065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.348263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.348290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.348475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.348504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.348679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.348708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.348931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.348958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.349196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.030 [2024-07-15 19:19:52.349224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.030 qpair failed and we were unable to recover it. 00:25:12.030 [2024-07-15 19:19:52.349406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.349435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 
00:25:12.031 [2024-07-15 19:19:52.349650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.349676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.349910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.349936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.350106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.350132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.350308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.350334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.350518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.350547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.350734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.350763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.350955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.350984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.351128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.351170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.351332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.351361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.351577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.351603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 
00:25:12.031 [2024-07-15 19:19:52.351800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.351830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.352035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.352070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3412200 Killed "${NVMF_APP[@]}" "$@" 00:25:12.031 [2024-07-15 19:19:52.352234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.352262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.352430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.352473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:12.031 [2024-07-15 19:19:52.352686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.352716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:12.031 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.031 [2024-07-15 19:19:52.352930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.352958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:12.031 [2024-07-15 19:19:52.353154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.031 [2024-07-15 19:19:52.353184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 
00:25:12.031 [2024-07-15 19:19:52.353396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.353425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.353640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.353667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.353816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.353842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.354016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.354044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.354188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.354214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.354433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.354462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.031 [2024-07-15 19:19:52.354644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.031 [2024-07-15 19:19:52.354673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.031 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.354865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.354899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.355083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.355112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.355298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.355327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 
00:25:12.032 [2024-07-15 19:19:52.355511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.355537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.355758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.355787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.355976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.356006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.356196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.356222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.356411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.356441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.356656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.356682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3412752 00:25:12.032 [2024-07-15 19:19:52.356873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:12.032 [2024-07-15 19:19:52.356910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3412752 00:25:12.032 [2024-07-15 19:19:52.357133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.357163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 
00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3412752 ']' 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.032 [2024-07-15 19:19:52.357380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.357409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.032 [2024-07-15 19:19:52.357602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.357629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.032 [2024-07-15 19:19:52.357797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.357824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.357996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.358026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.358467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.358499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.358697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.358727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.358914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.358944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 
00:25:12.032 [2024-07-15 19:19:52.359113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.359140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.359287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.359314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.359501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.359535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.359752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.359781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.359977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.360005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.360176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.360203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.360375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.360401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.360623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.360652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.360849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.360882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.361055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.361082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 
00:25:12.032 [2024-07-15 19:19:52.361273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.361302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.361485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.361514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.032 qpair failed and we were unable to recover it. 00:25:12.032 [2024-07-15 19:19:52.361706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.032 [2024-07-15 19:19:52.361733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.361926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.361956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.362181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.362208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.362379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.362406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.362604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.362630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.362827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.362856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.363028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.363055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.363250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.363276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 
00:25:12.033 [2024-07-15 19:19:52.363451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.363480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.363704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.363730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.363924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.363954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.364114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.364143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.364309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.364335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.364502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.364528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.364688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.364717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.364931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.364958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.365131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.365175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.365372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.365401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 
00:25:12.033 [2024-07-15 19:19:52.365593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.365619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.365834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.365864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.366029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.366059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.366248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.366274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.366488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.366517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.366748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.366773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.366935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.366963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.367158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.367184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.367378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.367409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.367600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.367627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 
00:25:12.033 [2024-07-15 19:19:52.367791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.367820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.368017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.368044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.368186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.368217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.368391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.368418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.368565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.368591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.368786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.368812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.369019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.369049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.369239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.369268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.369436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.369464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.369657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.369686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 
00:25:12.033 [2024-07-15 19:19:52.369903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.369934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.370099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.370125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.033 [2024-07-15 19:19:52.370278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.033 [2024-07-15 19:19:52.370308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.033 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.370516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.370546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.370741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.370767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.370913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.370941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.371135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.371165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.371354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.371380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.371543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.371573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.371782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.371811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 
00:25:12.034 [2024-07-15 19:19:52.371968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.371995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.372162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.372205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.372400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.372427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.372565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.372591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.372795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.372822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.373032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.373061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.373251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.373277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.373439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.373469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.373652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.373681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.373883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.373913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 
00:25:12.034 [2024-07-15 19:19:52.374133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.374175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.374395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.374421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.374560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.374586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.374771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.374800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.374992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.375022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.375192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.375218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.375401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.375429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.375617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.375646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.375834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.375860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.376070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.376098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 
00:25:12.034 [2024-07-15 19:19:52.376311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.376340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.376511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.376537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.376763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.376796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.376993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.377021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.377190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.377217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.377407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.377436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.377596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.377625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.377788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.377815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.034 [2024-07-15 19:19:52.377979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.034 [2024-07-15 19:19:52.378006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.034 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.378149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.378200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 
00:25:12.035 [2024-07-15 19:19:52.378381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.378408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.378601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.378631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.378808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.378837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.379020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.379046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.379244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.379274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.379492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.379522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.379718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.379744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.379903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.379933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.380144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.380184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.380354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.380380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 
00:25:12.035 [2024-07-15 19:19:52.380571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.380600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.380816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.380844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.381151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.381187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.381363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.381389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.381581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.381610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.381796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.381825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.381989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.382016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.382189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.382215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.382362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.382388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.382585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.382614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 
00:25:12.035 [2024-07-15 19:19:52.382814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.382840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.383038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.383065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.383268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.383297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.383460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.383486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.383682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.383708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.383906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.383950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.384111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.384139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.384322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.384348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.384540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.384569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.384746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.384774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 
00:25:12.035 [2024-07-15 19:19:52.384960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.384986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.385150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.385189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.385392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.385435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.385655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.385682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.385907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.385937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.386126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.386155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.386310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.386337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.386524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.386553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.386740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.386769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.386959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.386987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 
00:25:12.035 [2024-07-15 19:19:52.387171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.387200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.035 [2024-07-15 19:19:52.387407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.035 [2024-07-15 19:19:52.387435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.035 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.387595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.387621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.387804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.387832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.388030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.388057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.388228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.388255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.388456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.388485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.388682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.388709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.388906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.388949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.389145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.389190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 
00:25:12.036 [2024-07-15 19:19:52.389415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.389444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.389662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.389688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.389904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.389933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.390116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.390146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.390337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.390363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.390547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.390576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.390766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.390795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.391003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.391030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.391245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.391274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.391488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.391517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 
00:25:12.036 [2024-07-15 19:19:52.391682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.391708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.391898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.391928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.392138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.392175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.392369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.392395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.392590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.392619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.392780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.392808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.393001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.393028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.393212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.393240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.393439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.393466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.393678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.393704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 
00:25:12.036 [2024-07-15 19:19:52.393873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.393914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.394097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.394124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.394342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.394372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.394587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.394615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.394797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.394825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.395016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.395043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.395209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.395237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.395440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.395467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.395657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.395683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.395917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.395946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 
00:25:12.036 [2024-07-15 19:19:52.396140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.396177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.396311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.396337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.396537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.396566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.396716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.396744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.396942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.396969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.397125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.036 [2024-07-15 19:19:52.397154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.036 qpair failed and we were unable to recover it. 00:25:12.036 [2024-07-15 19:19:52.397332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.397360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.397572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.397598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.397785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.397813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.397983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.398010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 
00:25:12.037 [2024-07-15 19:19:52.398198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.398225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.398437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.398464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.398660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.398687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.398916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.398943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.399117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.399144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.399324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.399352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.399526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.399553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.399737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.399764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.399906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.399951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.400120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.400147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 
00:25:12.037 [2024-07-15 19:19:52.400317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.400344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.400493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.400519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.400691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.400716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.400911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.400937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.401107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.401142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.401299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.401332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.401520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.401553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.401708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.401741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.401898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.401926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 00:25:12.037 [2024-07-15 19:19:52.402074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.037 [2024-07-15 19:19:52.402100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.037 qpair failed and we were unable to recover it. 
00:25:12.319 [2024-07-15 19:19:52.402268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.319 [2024-07-15 19:19:52.402295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.319 qpair failed and we were unable to recover it. 00:25:12.319 [2024-07-15 19:19:52.402465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.319 [2024-07-15 19:19:52.402492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.319 qpair failed and we were unable to recover it. 00:25:12.319 [2024-07-15 19:19:52.402663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.319 [2024-07-15 19:19:52.402694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.319 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.402831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.402857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.403011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.403049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.403184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.403210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.403353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.403380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.403547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.403574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.403745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.403772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.403917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.403944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 
00:25:12.320 [2024-07-15 19:19:52.404093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.404120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.404282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.404308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.404489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.404515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.404682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.404708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.404846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.404892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.405058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.405084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.405082] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:12.320 [2024-07-15 19:19:52.405189] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.320 [2024-07-15 19:19:52.405230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.405255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.405398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.405432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 00:25:12.320 [2024-07-15 19:19:52.405624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.320 [2024-07-15 19:19:52.405650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.320 qpair failed and we were unable to recover it. 
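The initialization line interleaved above shows the nvmf target starting with the DPDK EAL core mask "-c 0xF0"; the set bits of that mask are the CPU cores the target's threads may run on. Below is a minimal sketch in plain C (illustrative only, not DPDK or SPDK code) that decodes such a mask — the value 0xF0 is taken from the log line above and selects cores 4 through 7:

/* Decode a DPDK-style hexadecimal core mask into the cores it selects.
 * The mask value is copied from the "-c 0xF0" EAL parameter in the log;
 * everything else here is a standalone illustration. */
#include <stdio.h>

int main(void)
{
    unsigned long long coremask = 0xF0ULL;

    printf("core mask 0x%llX selects cores:", coremask);
    for (int core = 0; core < 64; core++) {
        if (coremask & (1ULL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n"); /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}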
00:25:12.320 [2024-07-15 19:19:52.405797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.320 [2024-07-15 19:19:52.405823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:12.320 qpair failed and we were unable to recover it.
[... the same three-line error group (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with consecutive timestamps from 19:19:52.405797 through 19:19:52.440286 ...]
00:25:12.325 EAL: No free 2048 kB hugepages reported on node 1
[... the same error group continues repeating from 19:19:52.440312 through 19:19:52.446689 ...]
00:25:12.326 [2024-07-15 19:19:52.446838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.446882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.450905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.450941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.451130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.451163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.451368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.451398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.451560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.451590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.451800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.451830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.452017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.452047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.452266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.452296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.452473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.452501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.452644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.452672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 
00:25:12.326 [2024-07-15 19:19:52.452890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.452920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.453101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.453131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.453355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.453387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.453561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.453591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.453796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.453826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.453980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.454010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.454199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.454227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.454428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.454456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.454633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.454661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.454842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.454886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 
00:25:12.326 [2024-07-15 19:19:52.455064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.455093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.455298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.455326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.455529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.455558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.455706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.455734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.455934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.455962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.456163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.456193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.456384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.456413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.456613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.456642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.456813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.456841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.457065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.457095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 
00:25:12.326 [2024-07-15 19:19:52.457302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.457331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.457516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.457545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.457722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.457750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.457910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.457937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.458119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.458147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.458307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.458334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.458533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.458563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.458769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.458798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.326 [2024-07-15 19:19:52.459005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.326 [2024-07-15 19:19:52.459035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.326 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.459213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.459251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 
00:25:12.327 [2024-07-15 19:19:52.459425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.459454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.459633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.459663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.461903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.461935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.462121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.462151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.462330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.462358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.462579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.462609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.462772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.462800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.462994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.463023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.463229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.463258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.463413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.463442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 
00:25:12.327 [2024-07-15 19:19:52.463624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.463653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.463844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.463874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.464100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.464133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.464338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.464367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.464564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.464592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.464831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.464860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.465027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.465055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.465201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.465228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.465420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.465449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.465606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.465634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 
00:25:12.327 [2024-07-15 19:19:52.465777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.465805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.466003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.466033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.466227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.466255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.466497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.466526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.466688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.466717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.467915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.467953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.468178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.468211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.468373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.468404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.468598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.468627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.468823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.468854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 
00:25:12.327 [2024-07-15 19:19:52.469024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.469054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.469238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.469267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.469449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.469480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.469682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.327 [2024-07-15 19:19:52.469713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.327 qpair failed and we were unable to recover it. 00:25:12.327 [2024-07-15 19:19:52.469895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.469926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.472902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.472945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.473172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.473203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.473430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.473459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.473638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.473667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.473820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.473848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 
00:25:12.328 [2024-07-15 19:19:52.474051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.474080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.474300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.474329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.474481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.474508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.474689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.474718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.474926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.474956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.475108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.475134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.475352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.475380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.475563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.475591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.475771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.475799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.476000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.476029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 
00:25:12.328 [2024-07-15 19:19:52.476199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.476227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.476364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.476392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.476530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.476562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.476739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.476767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.476923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.476951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.477152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.477184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.477363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.477390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.477531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.477558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.477757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.477785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.477928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.477956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 
00:25:12.328 [2024-07-15 19:19:52.478001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:12.328 [2024-07-15 19:19:52.478110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.478138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.478326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.478354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.478524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.478551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.478731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.478758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.481890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.481938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.482196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.482233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.482424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.482454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.482638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.482668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 00:25:12.328 [2024-07-15 19:19:52.482885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.328 [2024-07-15 19:19:52.482917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.328 qpair failed and we were unable to recover it. 
00:25:12.328 [2024-07-15 19:19:52.483179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.483212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.483450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.483479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.483678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.483707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.483885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.483915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.484108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.484138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.484332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.484360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.484617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.484647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.484904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.484934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.485120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.485151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.485411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.485440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 
00:25:12.329 [2024-07-15 19:19:52.485681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.485712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.485895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.485928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.486088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.486117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.486306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.486337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.486497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.486526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.486710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.486743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.486903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.486933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.487119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.487148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.487371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.487400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.487579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.487607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 
00:25:12.329 [2024-07-15 19:19:52.487776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.487804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.487957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.487985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.488245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.488273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.488511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.488540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.488728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.488757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.491513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.491545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.491779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.491809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.492049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.492081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.492297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.492328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.492479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.492508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 
00:25:12.329 [2024-07-15 19:19:52.492688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.492718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.492951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.492982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.493196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.493227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.493420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.493450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.493604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.493635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.493819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.493848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.494063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.494099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.494294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.494323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.494504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.494534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.494758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.494788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 
00:25:12.329 [2024-07-15 19:19:52.494984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.495013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.495195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.495225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.495412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.329 [2024-07-15 19:19:52.495444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.329 qpair failed and we were unable to recover it. 00:25:12.329 [2024-07-15 19:19:52.495620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.495649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.495834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.495872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.496086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.496116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.496273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.496301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.496483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.496512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.496692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.496721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.497920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.497953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 
00:25:12.330 [2024-07-15 19:19:52.498206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.498237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.498444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.498474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.498669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.498698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.498889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.498918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.499097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.499126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.499372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.499400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.499587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.499624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.499813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.499848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.500051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.500088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.500322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.500362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 
00:25:12.330 [2024-07-15 19:19:52.500566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.500606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.500806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.500844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.501065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.501104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.501427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.501471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.501665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.501692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.501831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.501858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.502012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.502039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.502241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.502268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.502418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.502445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.502622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.502649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 
00:25:12.330 [2024-07-15 19:19:52.502826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.502854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.503030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.503057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.503225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.503252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.503445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.503472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.503669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.503696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.503867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.503900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.504099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.504125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.504330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.504356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.504499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.504526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.504674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.504700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 
00:25:12.330 [2024-07-15 19:19:52.504851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.504886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.505086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.505112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.505283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.505311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.505486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.505513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.505677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.505704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.505866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.505901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.506054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.506081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.330 [2024-07-15 19:19:52.506295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.330 [2024-07-15 19:19:52.506321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.330 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.506502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.506527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.506668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.506694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 
00:25:12.331 [2024-07-15 19:19:52.506863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.506901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.507073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.507100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.507278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.507305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.507480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.507506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.507678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.507704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.507873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.507906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.508075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.508102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.508278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.508306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.508440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.508467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.508665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.508692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 
00:25:12.331 [2024-07-15 19:19:52.508857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.508890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.509089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.509116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.509312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.509338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.509489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.509516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.509690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.509717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.509860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.509894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.510069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.510095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.510259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.510285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.510462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.510488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.510634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.510660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 
00:25:12.331 [2024-07-15 19:19:52.510829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.510855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.511001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.511027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.511167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.511193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.511358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.511384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.511584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.511611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.511774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.511800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.511969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.511996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.512131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.512172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.512349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.512376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.512568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.512594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 
00:25:12.331 [2024-07-15 19:19:52.512746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.512774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.512944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.512972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.513138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.513174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.513341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.513368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.513540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.513566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.513709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.513736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.513908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.513935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.514133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.514160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.514353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.514379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.514546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.514573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 
00:25:12.331 [2024-07-15 19:19:52.514739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.514765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.514912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.514939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.331 [2024-07-15 19:19:52.515114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.331 [2024-07-15 19:19:52.515140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.331 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.515304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.515331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.515471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.515497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.515695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.515722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.515886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.515912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.516084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.516111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.516305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.516331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.516502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.516528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 
00:25:12.332 [2024-07-15 19:19:52.516669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.516696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.516834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.516871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.517077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.517103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.517269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.517295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.517436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.517463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.517637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.517664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.517838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.517865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.518057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.518084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.518233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.518260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.518423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.518449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 
00:25:12.332 [2024-07-15 19:19:52.518645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.518672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.518836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.518864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.519041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.519068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.519243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.519271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.519465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.519500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.519668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.519695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.519827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.519854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.520031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.520057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.520221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.520252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.520420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.520446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 
00:25:12.332 [2024-07-15 19:19:52.520618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.520645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.520845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.520871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.521079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.521105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.521245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.521272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.521446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.521472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.521639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.521666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.521836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.521862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.522014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.522041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.522238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.522264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.332 [2024-07-15 19:19:52.522443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.522469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 
00:25:12.332 [2024-07-15 19:19:52.522647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.332 [2024-07-15 19:19:52.522673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.332 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.522869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.522901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.523102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.523129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.523271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.523298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.523445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.523471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.523639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.523665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.523807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.523834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.524036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.524062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.524203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.524229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.524370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.524396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 
00:25:12.333 [2024-07-15 19:19:52.524536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.524563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.524717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.524744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.524904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.524931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.525127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.525154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.525325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.525351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.525546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.525572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.525717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.525744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.525945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.525972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.526141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.526177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.526344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.526370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 
00:25:12.333 [2024-07-15 19:19:52.526545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.526572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.526737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.526764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.526957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.526984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.527180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.527207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.527345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.527372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.527542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.527569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.527740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.527767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.527938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.527965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.528132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.528158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.528389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.528431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 
00:25:12.333 [2024-07-15 19:19:52.528593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.528621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.528766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.528794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.528965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.528993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.529170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.529198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.529387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.529416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.529557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.529584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.529782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.529809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.529973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.530000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.530171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.530197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.530342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.530370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 
00:25:12.333 [2024-07-15 19:19:52.530546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.530573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.530744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.530773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.530947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.530975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.531147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.531174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.531353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.531380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.333 [2024-07-15 19:19:52.531555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.333 [2024-07-15 19:19:52.531582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.333 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.531755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.531783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.531965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.531993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.532179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.532207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.532398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.532425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 
00:25:12.334 [2024-07-15 19:19:52.532596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.532633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.532805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.532832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.533011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.533039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.533182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.533210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.533394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.533427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.533624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.533652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.533824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.533852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.534022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.534050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.534255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.534282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.534427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.534454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 
00:25:12.334 [2024-07-15 19:19:52.534652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.534679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.534848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.534894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.535093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.535122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.535294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.535321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.535492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.535519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.535721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.535748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.535889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.535916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.536060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.536087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.536285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.536312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.536478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.536510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 
00:25:12.334 [2024-07-15 19:19:52.536708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.536735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.536881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.536909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.537080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.537107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.537280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.537308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.537578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.537604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.537786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.537812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.537961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.537989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.538135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.538163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.538332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.538358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.538597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.538624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 
00:25:12.334 [2024-07-15 19:19:52.538829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.538855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.539037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.539064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.539239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.539265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.539439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.539467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.539633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.539661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.539834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.539885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.540071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.540098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.540275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.540302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.540534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.540561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 00:25:12.334 [2024-07-15 19:19:52.540732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.334 [2024-07-15 19:19:52.540759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.334 qpair failed and we were unable to recover it. 
00:25:12.334 [2024-07-15 19:19:52.540921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.540949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.541121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.541148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.541318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.541345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.541513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.541540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.541692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.541719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.541890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.541918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.542099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.542126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.542309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.542337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.542536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.542563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.542736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.542763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 
00:25:12.335 [2024-07-15 19:19:52.542968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.542995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.543175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.543202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.543367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.543394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.543648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.543675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.543847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.543889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.544061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.544088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.544285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.544311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.544497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.544528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.544716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.544743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.544917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.544949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 
00:25:12.335 [2024-07-15 19:19:52.545116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.545143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.545360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.545387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.545535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.545562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.545707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.545735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.545980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.546007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.546178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.546204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.546386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.546413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.546590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.546617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.546788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.546815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.546990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.547018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 
00:25:12.335 [2024-07-15 19:19:52.547176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.547204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.547405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.547431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.547608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.547635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.547913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.547940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.548105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.548132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.548304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.548331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.548482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.548509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.548658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.548686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.548889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.548916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.549082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.549109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 
00:25:12.335 [2024-07-15 19:19:52.549314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.549341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.549516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.549542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.549684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.549711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.549903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.335 [2024-07-15 19:19:52.549930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.335 qpair failed and we were unable to recover it. 00:25:12.335 [2024-07-15 19:19:52.550077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.550104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.550281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.550308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.550512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.550539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.550734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.550760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.550931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.550959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.551097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.551124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 
00:25:12.336 [2024-07-15 19:19:52.551337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.551364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.551532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.551559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.551739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.551767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.551961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.551989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.552152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.552187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.552330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.552358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.552530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.552557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.552728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.552756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.552929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.552958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.553157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.553201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 
00:25:12.336 [2024-07-15 19:19:52.553377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.553406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.553607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.553634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.553829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.553857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.554048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.554076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.554266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.554294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.554492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.554521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.554694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.554722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.554926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.554954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.555126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.555154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.555305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.555332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 
00:25:12.336 [2024-07-15 19:19:52.555480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.555515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.555710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.555737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.555911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.555939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.556139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.556178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.556348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.556376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.556546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.556574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.556745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.556772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.556952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.556980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.557155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.557183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.557368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.557395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 
00:25:12.336 [2024-07-15 19:19:52.557575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.557602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.557806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.557833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.336 [2024-07-15 19:19:52.558055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.336 [2024-07-15 19:19:52.558104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.336 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.558359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.558404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.558663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.558704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.558902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.558931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.559111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.559137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.559314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.559340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.559516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.559544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.559742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.559770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 
00:25:12.337 [2024-07-15 19:19:52.559944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.559972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.560115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.560142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.560315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.560342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.560488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.560515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.560718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.560745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.560905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.560934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.561135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.561163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.561358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.561386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.561556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.561583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.561751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.561787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 
00:25:12.337 [2024-07-15 19:19:52.561991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.562019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.562198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.562225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.562395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.562422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.562613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.562641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.562807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.562835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.563016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.563044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.563184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.563212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.563382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.563411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.563559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.563586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.563757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.563783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 
00:25:12.337 [2024-07-15 19:19:52.563939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.563968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.564134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.564160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.564323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.564351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.564528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.564558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.564759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.564786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.564970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.564998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.565167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.565199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.565392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.565420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.565585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.565612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.565805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.565833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 
00:25:12.337 [2024-07-15 19:19:52.566036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.566064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.566230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.566257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.566424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.566452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.566646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.566673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.566865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.566898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.567038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.567066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.567243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.337 [2024-07-15 19:19:52.567270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.337 qpair failed and we were unable to recover it. 00:25:12.337 [2024-07-15 19:19:52.567467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.567494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.567691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.567718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.567888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.567916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 
00:25:12.338 [2024-07-15 19:19:52.568079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.568105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.568276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.568302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.568494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.568521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.568689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.568715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.568913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.568940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.569107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.569134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.569277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.569303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.569475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.569501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.569675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.569701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.569899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.569931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 
00:25:12.338 [2024-07-15 19:19:52.570076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.570102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.570282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.570308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.570481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.570508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.570677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.570705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.570899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.570926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.571078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.571105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.571305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.571333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.571513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.571541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.571707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.571735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.571931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.571959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 
00:25:12.338 [2024-07-15 19:19:52.572127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.572154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.572328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.572355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.572527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.572554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.572731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.572759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.572934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.572962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.573160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.573187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.573362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.573389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.573563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.573590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.573783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.573810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.573985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.574014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 
00:25:12.338 [2024-07-15 19:19:52.574184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.574212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.574406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.574433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.574604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.574631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.574840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.574884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.575054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.575081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.575238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.575265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.575434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.575461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.575653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.575681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.575849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.575882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 00:25:12.338 [2024-07-15 19:19:52.576077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.576104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.338 qpair failed and we were unable to recover it. 
00:25:12.338 [2024-07-15 19:19:52.576278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.338 [2024-07-15 19:19:52.576305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.576476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.576503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.576674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.576701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.576867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.576922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.577072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.577099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.577266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.577293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.577469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.577497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.577666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.577693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.577840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.577895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.578069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.578100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 
00:25:12.339 [2024-07-15 19:19:52.578279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.578307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.578504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.578531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.578702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.578728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.578901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.578929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.579103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.579130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.579298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.579325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.579518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.579545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.579738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.579765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.579975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.580002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.580197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.580224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 
00:25:12.339 [2024-07-15 19:19:52.580363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.580390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.580563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.580590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.580786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.580813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.580998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.581026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.581199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.581227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.581397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.581423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.581601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.581629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.581805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.581832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.582009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.582037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.582216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.582242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 
00:25:12.339 [2024-07-15 19:19:52.582411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.582438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.582578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.582606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.582778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.582805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.583004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.583032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.583202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.583229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.583364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.583391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.583577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.583604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.583780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.583808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.583962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.583989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.584134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.584161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 
00:25:12.339 [2024-07-15 19:19:52.584304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.584332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.584497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.584524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.584719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.584754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.584926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.584954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.339 [2024-07-15 19:19:52.585096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.339 [2024-07-15 19:19:52.585124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.339 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.585287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.585314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.585467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.585494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.585662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.585690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.585859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.585896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.586067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.586099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 
00:25:12.340 [2024-07-15 19:19:52.586272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.586299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.586466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.586493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.586687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.586715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.586915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.586944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.587140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.587180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.587342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.587370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.587543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.587570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.587712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.587739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.587933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.587961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.588153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.588185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 
00:25:12.340 [2024-07-15 19:19:52.588381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.588408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.588554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.588582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.588764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.588791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.588996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.589023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.589173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.589201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.589367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.589394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.589565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.589592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.589759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.589786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.589951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.589979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.590153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.590183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 
00:25:12.340 [2024-07-15 19:19:52.590380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.590407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.590575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.590603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.590771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.590798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.590953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.590981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.591150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.591177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.591374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.591401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.591548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.591575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.591746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.591773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.591958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.591985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.592125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.592152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 
00:25:12.340 [2024-07-15 19:19:52.592326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.592353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.592492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.592519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.592715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.340 [2024-07-15 19:19:52.592741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.340 qpair failed and we were unable to recover it. 00:25:12.340 [2024-07-15 19:19:52.592943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.592971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.593164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.593190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.593367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.593393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.593569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.593595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.593733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.593759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.593937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.593965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.594131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.594161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 
00:25:12.341 [2024-07-15 19:19:52.594344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.594370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.594537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.594564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.594707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.594734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.594905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.594932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.595079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.595105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.595255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.595281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.595428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.595455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.595681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.595707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.595853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.595896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.596043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.596071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 
00:25:12.341 [2024-07-15 19:19:52.596237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.341 [2024-07-15 19:19:52.596263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.341 qpair failed and we were unable to recover it.
00:25:12.341 [2024-07-15 19:19:52.596396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:12.341 [2024-07-15 19:19:52.596406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.341 [2024-07-15 19:19:52.596429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:12.341 [2024-07-15 19:19:52.596434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.341 [2024-07-15 19:19:52.596444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:12.341 qpair failed and we were unable to recover it.
00:25:12.341 [2024-07-15 19:19:52.596457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:12.341 [2024-07-15 19:19:52.596467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:12.341 [2024-07-15 19:19:52.596533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:12.341 [2024-07-15 19:19:52.596608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.341 [2024-07-15 19:19:52.596634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.341 qpair failed and we were unable to recover it.
00:25:12.341 [2024-07-15 19:19:52.596596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:12.341 [2024-07-15 19:19:52.596622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:12.341 [2024-07-15 19:19:52.596624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:12.341 [2024-07-15 19:19:52.596822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.341 [2024-07-15 19:19:52.596848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.341 qpair failed and we were unable to recover it.
00:25:12.341 [2024-07-15 19:19:52.597030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.341 [2024-07-15 19:19:52.597057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.341 qpair failed and we were unable to recover it.
00:25:12.341 [2024-07-15 19:19:52.597256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.341 [2024-07-15 19:19:52.597283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.341 qpair failed and we were unable to recover it.
00:25:12.341 [2024-07-15 19:19:52.597459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.341 [2024-07-15 19:19:52.597485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.341 qpair failed and we were unable to recover it.
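(Illustrative aside, not part of the captured output: the app_setup_trace notices above name the trace facilities available for this run. A minimal sketch of the two capture paths they describe, reusing the 'spdk_trace -s nvmf -i 0' invocation and the /dev/shm/nvmf_trace.0 file from the log; the destination file names are assumptions for the example.)
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt    # snapshot of runtime events, as the notice suggests
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0     # or keep the raw shared-memory trace file for offline analysis/debug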
00:25:12.341 [2024-07-15 19:19:52.597634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.597660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.597819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.597846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.598011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.598037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.598170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.598196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.598370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.598396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.598533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.598559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.598709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.598735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.598917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.598945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.599092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.599119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.599274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.599300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 
00:25:12.341 [2024-07-15 19:19:52.599472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.599499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.599637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.599664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.599829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.599856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.600051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.600077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.600215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.600250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.600411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.600438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.600567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.341 [2024-07-15 19:19:52.600594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.341 qpair failed and we were unable to recover it. 00:25:12.341 [2024-07-15 19:19:52.600772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.600798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.600975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.601002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.601139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.601185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 
00:25:12.342 [2024-07-15 19:19:52.601354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.601379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.601519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.601545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.601680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.601707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.601853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.601890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.602036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.602064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.602232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.602259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.602416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.602443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.602609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.602636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.602827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.602853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.603026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.603052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 
00:25:12.342 [2024-07-15 19:19:52.603194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.603220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.603359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.603385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.603537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.603563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.603738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.603764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.603909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.603936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.604134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.604160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.604332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.604359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.604525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.604551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.604722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.604750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.604919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.604946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 
00:25:12.342 [2024-07-15 19:19:52.605091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.605118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.605292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.605319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.605460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.605486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.605634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.605661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.605830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.605857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.606036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.606064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.606246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.606272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.606441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.606468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.606663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.606689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.606854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.606888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 
00:25:12.342 [2024-07-15 19:19:52.607093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.607120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.607279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.607306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.607457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.607484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.607661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.607687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.607823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.607849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.608014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.608041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.608214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.608249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.608433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.608459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.608626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.608652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.608821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.608852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 
00:25:12.342 [2024-07-15 19:19:52.609025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.609052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.342 [2024-07-15 19:19:52.609219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.342 [2024-07-15 19:19:52.609245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.342 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.609411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.609438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.609679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.609706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.609840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.609883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.610061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.610087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.610229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.610257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.610440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.610467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.610636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.610662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.610837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.610872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 
00:25:12.343 [2024-07-15 19:19:52.611016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.611043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.611182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.611208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.611376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.611403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.611545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.611571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.611733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.611759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.611926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.611953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.612123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.612150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.612360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.612387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.612536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.612562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.612719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.612745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 
00:25:12.343 [2024-07-15 19:19:52.612948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.612975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.613143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.613181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.613343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.613370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.613501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.613527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.613677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.613704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.613847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.613887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.614064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.614091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.614231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.614257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.614429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.614455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.614591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.614618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 
00:25:12.343 [2024-07-15 19:19:52.614759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.614785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.614955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.614982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.615121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.615147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.615293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.615319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.615499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.615526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.615716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.615742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.615910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.615938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.616074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.616101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.616266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.616292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.616442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.616472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 
00:25:12.343 [2024-07-15 19:19:52.616613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.616641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.616806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.616832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.616994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.617021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.617192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.617219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.617421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.617447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.617610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.343 [2024-07-15 19:19:52.617637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.343 qpair failed and we were unable to recover it. 00:25:12.343 [2024-07-15 19:19:52.617778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.617804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.617956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.617983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.618164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.618191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.618380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.618406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 
00:25:12.344 [2024-07-15 19:19:52.618577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.618605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.618772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.618799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.618968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.618996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.619162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.619189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.619343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.619370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.619541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.619568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.619713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.619740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.619913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.619941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.620104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.620131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.620310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.620336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 
00:25:12.344 [2024-07-15 19:19:52.620474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.620501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.620653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.620679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.620852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.620894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.621036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.621063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.621204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.621230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.621425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.621452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.621624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.621651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.621890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.621917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.622111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.622139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.622321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.622348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 
00:25:12.344 [2024-07-15 19:19:52.622541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.622567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.622735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.622761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.622945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.622972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.623142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.623171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.623320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.623346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.623504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.623530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.623767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.623793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.623959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.623987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.624136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.624174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.624358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.624390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 
00:25:12.344 [2024-07-15 19:19:52.624541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.624568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.624738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.624765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.624928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.624955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.625107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.625133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.344 qpair failed and we were unable to recover it. 00:25:12.344 [2024-07-15 19:19:52.625303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.344 [2024-07-15 19:19:52.625329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.625464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.625490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.625631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.625657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.625805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.625831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.625982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.626009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.626205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.626230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 
00:25:12.345 [2024-07-15 19:19:52.626426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.626453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.626584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.626610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.626762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.626788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.626994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.627022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.627172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.627199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.627342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.627368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.627537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.627564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.627710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.627738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.627889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.627917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 00:25:12.345 [2024-07-15 19:19:52.628051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.345 [2024-07-15 19:19:52.628078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.345 qpair failed and we were unable to recover it. 
00:25:12.345 [2024-07-15 19:19:52.628219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.345 [2024-07-15 19:19:52.628254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420
00:25:12.345 qpair failed and we were unable to recover it.
00:25:12.350 [output condensed: the identical three-line failure above (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 19:19:52.628 through 19:19:52.667; every connection attempt in this span fails and the qpair is never recovered.]
00:25:12.350 [2024-07-15 19:19:52.667884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.667911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.668047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.668074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.668227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.668254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.668426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.668452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.668646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.668672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.668847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.668874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.669033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.669060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.669214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.669241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.669451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.669478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.669637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.669663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 
00:25:12.350 [2024-07-15 19:19:52.669838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.669864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96a4000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.670039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.670077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.670225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.670252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.670392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.670418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.670554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.670581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.670736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.670762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.670915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.670942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.671079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.671106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.671264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.671290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.671524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.671551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 
00:25:12.350 [2024-07-15 19:19:52.671691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.671717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.671892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.671918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.672062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.672089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.672255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.672281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.672414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.672440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.672612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.672638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.672788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.672813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.672956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.672984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.673128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.673155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.673300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.673326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 
00:25:12.350 [2024-07-15 19:19:52.673477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.673503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.673672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.673699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.673846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.673873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.674032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.350 [2024-07-15 19:19:52.674059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.350 qpair failed and we were unable to recover it. 00:25:12.350 [2024-07-15 19:19:52.674197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.674224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.674400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.674427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.674568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.674599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.674755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.674781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.674951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.674977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.675173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.675199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 
00:25:12.351 [2024-07-15 19:19:52.675361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.675387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.675535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.675561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.675730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.675757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.675908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.675935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.676097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.676122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.676276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.676302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.676439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.676465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.676638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.676664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.676793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.676819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.676989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.677016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 
00:25:12.351 [2024-07-15 19:19:52.677154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.677180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.677321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.677347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.677499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.677525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.677688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.677714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.677854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.677886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.678042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.678069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.678233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.678259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.678425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.678451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.678584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.678610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.678747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.678773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 
00:25:12.351 [2024-07-15 19:19:52.678964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.678991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.679152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.679177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.679314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.679339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.679477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.679503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.679670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.679697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.679861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.679893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.680068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.680095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.680273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.680299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.680472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.680498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.680630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.680656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 
00:25:12.351 [2024-07-15 19:19:52.680819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.680845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.681028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.681057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.681256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.681282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.681423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.681449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.681583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.681609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.681800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.681826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.681961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.681992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.682156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.682182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.351 qpair failed and we were unable to recover it. 00:25:12.351 [2024-07-15 19:19:52.682321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.351 [2024-07-15 19:19:52.682347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.682510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.682536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 
00:25:12.352 [2024-07-15 19:19:52.682677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.682703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.682870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.682902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.683067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.683093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.683230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.683257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.683421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.683447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.683605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.683631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.683793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.683819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.683962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.683988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.684132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.684158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.684311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.684337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 
00:25:12.352 [2024-07-15 19:19:52.684511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.684537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.684730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.684756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.684938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.684965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.685104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.685129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.685294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.685320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.685489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.685515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.685657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.685687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.685851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.685885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.686023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.686049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.686242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.686268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 
00:25:12.352 [2024-07-15 19:19:52.686409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.686435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.686596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.686622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.686775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.686801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.686978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.687005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.687147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.687172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.687313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.687341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.687511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.687537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.687698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.687724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.687886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.687913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.688058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.688085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 
00:25:12.352 [2024-07-15 19:19:52.688255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.688281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.688448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.688474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.688643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.688669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.688808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.688834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.689004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.689031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.689203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.352 [2024-07-15 19:19:52.689230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.352 qpair failed and we were unable to recover it. 00:25:12.352 [2024-07-15 19:19:52.689398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.689428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.689558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.689584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.689755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.689782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.689959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.689986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 
00:25:12.353 [2024-07-15 19:19:52.690122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.690149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.690279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.690305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.690466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.690492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.690672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.690697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.690864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.690896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.691044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.691070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.691236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.691261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.691403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.691431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.691625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.691651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 00:25:12.353 [2024-07-15 19:19:52.691791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.353 [2024-07-15 19:19:52.691817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 00:25:12.353 qpair failed and we were unable to recover it. 
00:25:12.353 [2024-07-15 19:19:52.691975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.353 [2024-07-15 19:19:52.692002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420
00:25:12.353 qpair failed and we were unable to recover it.
[... the same three-line sequence -- connect() failed, errno = 111 at posix.c:1038, sock connection error of tqpair=0x7f96ac000b90 with addr=10.0.0.2, port=4420 at nvme_tcp.c:2383, and "qpair failed and we were unable to recover it." -- repeats with timestamps 2024-07-15 19:19:52.692174 through 19:19:52.721035 ...]
00:25:12.357 [2024-07-15 19:19:52.721226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:12.357 [2024-07-15 19:19:52.721269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420
00:25:12.357 qpair failed and we were unable to recover it.
[... the same three-line sequence for tqpair=0xc12200 repeats with timestamps 2024-07-15 19:19:52.721421 through 19:19:52.730609 ...]
00:25:12.624 [2024-07-15 19:19:52.730779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.730805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.730964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.730990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.731132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.731157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.731295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.731321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.731461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.731487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.731650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.731676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.731815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.731843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.732022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.732048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.732198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.732225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.732390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.732416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 
00:25:12.624 [2024-07-15 19:19:52.732611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.732636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.732773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.732799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.732961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.732988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.733171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.733197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.733364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.733390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.733535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.733560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.733705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.733731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.733909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.733936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.734123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.734149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 00:25:12.624 [2024-07-15 19:19:52.734332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.624 [2024-07-15 19:19:52.734358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.624 qpair failed and we were unable to recover it. 
[2024-07-15 19:19:52.734527 to 19:19:52.735831] (connect() failed, errno = 111 / sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. keeps repeating around the shell trace below) 
00:25:12.624 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 
00:25:12.624 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 
00:25:12.625 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 
00:25:12.625 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 
00:25:12.625 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:25:12.625 [2024-07-15 19:19:52.735983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:12.625 [2024-07-15 19:19:52.736011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 
00:25:12.625 qpair failed and we were unable to recover it. 
[2024-07-15 19:19:52.736183 to 19:19:52.750835] (the same connect() failed, errno = 111 / sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats for every further connection attempt in this interval) 
00:25:12.627 [2024-07-15 19:19:52.751002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:12.627 [2024-07-15 19:19:52.751028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 
00:25:12.627 qpair failed and we were unable to recover it. 
[2024-07-15 19:19:52.751200 to 19:19:52.754188] (connect() failed, errno = 111 / sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. keeps repeating around the shell trace below) 
00:25:12.627 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 
00:25:12.627 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:25:12.627 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:12.627 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:25:12.627 [2024-07-15 19:19:52.754350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.627 [2024-07-15 19:19:52.754376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.627 qpair failed and we were unable to recover it. 00:25:12.627 [2024-07-15 19:19:52.754512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.627 [2024-07-15 19:19:52.754538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.627 qpair failed and we were unable to recover it. 00:25:12.627 [2024-07-15 19:19:52.754703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.627 [2024-07-15 19:19:52.754729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.627 qpair failed and we were unable to recover it. 00:25:12.627 [2024-07-15 19:19:52.754898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.754924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.755087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.755113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.755305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.755331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.755465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.755491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.755634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.755660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.755826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.755852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.756002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.756028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 
00:25:12.628 [2024-07-15 19:19:52.756191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.756224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.756360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.756386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.756532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.756557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.756788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.756814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.757061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.757088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.757287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.757313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.757460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.757486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.757651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.757677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.757827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.757853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.758060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.758086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 
00:25:12.628 [2024-07-15 19:19:52.758225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.758262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.758406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.758432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.758578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.758604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.758767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.758792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.758938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.758965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.759111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.759138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.759312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.759338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.759473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.759499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.759671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.759697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.759849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.759891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 
00:25:12.628 [2024-07-15 19:19:52.760025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.760051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.760216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.760247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.760395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.628 [2024-07-15 19:19:52.760421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.628 qpair failed and we were unable to recover it. 00:25:12.628 [2024-07-15 19:19:52.760591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.760617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.760790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.760816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.760964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.760991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.761153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.761190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.761349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.761375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.761540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.761566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.761730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.761756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 
00:25:12.629 [2024-07-15 19:19:52.761906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.761934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.762084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.762110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.762261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.762287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.762482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.762508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.762679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.762705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.762856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.762901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.763039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.763066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.763233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.763259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.763400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.763425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.763569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.763595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 
00:25:12.629 [2024-07-15 19:19:52.763734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.763760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.763926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.763957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.764225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.764251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.764395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.764421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.764589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.764615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.764757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.764782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.764969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.764995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.765169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.765195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.765336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.765362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.765500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.765526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 
00:25:12.629 [2024-07-15 19:19:52.765697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.765723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.765892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.765919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.766080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.766105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.766315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.766340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.766497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.766522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.766698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.766724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.766897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.766923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.767063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.767089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.767236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.767262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 00:25:12.629 [2024-07-15 19:19:52.767407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.767433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.629 qpair failed and we were unable to recover it. 
00:25:12.629 [2024-07-15 19:19:52.767588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.629 [2024-07-15 19:19:52.767614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.767845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.767886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.768063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.768089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.768232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.768258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.768401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.768427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.768576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.768602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.768742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.768768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.768951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.768977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.769122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.769153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.769333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.769358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 
00:25:12.630 [2024-07-15 19:19:52.769507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.769533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.769707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.769735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.769903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.769930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.770064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.770090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.770274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.770300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.770441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.770467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.770620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.770646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.770796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.770822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.770991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.771018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.771167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.771193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 
00:25:12.630 [2024-07-15 19:19:52.771329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.771355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.771526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.771552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.771720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.771747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.771906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.771933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.772213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.772239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.772434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.772459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.772622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.772648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.772807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.772833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.773024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.773051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.773195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.773222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 
00:25:12.630 [2024-07-15 19:19:52.773361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.773387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.773526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.773552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.773696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.773722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.773858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.773891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.774054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.630 [2024-07-15 19:19:52.774080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.630 qpair failed and we were unable to recover it. 00:25:12.630 [2024-07-15 19:19:52.774260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.774286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.774457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.774483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.774643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.774669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.774834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.774860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.775062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.775088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 
00:25:12.631 [2024-07-15 19:19:52.775243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.775269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.775428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.775454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.775584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.775610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.775771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.775797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 Malloc0 00:25:12.631 [2024-07-15 19:19:52.775960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.775988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.776130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.776157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.631 [2024-07-15 19:19:52.776324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.776351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:12.631 [2024-07-15 19:19:52.776508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.776535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 
00:25:12.631 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.631 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.631 [2024-07-15 19:19:52.776703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.776731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.776886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.776912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.777073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.777100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.777244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.777270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.777431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.777457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.777622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.777648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.777793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.777819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.777971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.777998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.778134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.778160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 
00:25:12.631 [2024-07-15 19:19:52.778299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.778324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.778478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.778503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.778683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.778709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.778880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.778907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.779043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.779072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.779249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.779275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.779409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.779434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.779593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.779619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.779675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.631 [2024-07-15 19:19:52.779760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.779784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.779935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.779961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 
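The rpc_cmd nvmf_create_transport -t tcp call traced above is the autotest wrapper around SPDK's JSON-RPC client, and the "*** TCP Transport Init ***" notice from tcp.c confirms the target accepted it. Outside the harness the same step looks roughly like the sketch below (an assumption-laden illustration: it presumes a default SPDK build with build/bin/nvmf_tgt and scripts/rpc.py available, and it omits the extra -o option the harness passes):

  # Start the NVMe-oF target (in one shell), then create the TCP transport via JSON-RPC.
  ./build/bin/nvmf_tgt &
  ./scripts/rpc.py nvmf_create_transport -t tcp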
00:25:12.631 [2024-07-15 19:19:52.780113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.780140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.780278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.780304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.780472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.780497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.780664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.780690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.780869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.780899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.781039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.781065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.781200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.631 [2024-07-15 19:19:52.781227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.631 qpair failed and we were unable to recover it. 00:25:12.631 [2024-07-15 19:19:52.781363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.781394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.781537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.781563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.781722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.781748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 
00:25:12.632 [2024-07-15 19:19:52.781901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.781927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.782093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.782119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.782294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.782320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.782482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.782508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.782661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.782687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.782854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.782899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.783046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.783072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.783215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.783248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.783396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.783422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.783594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.783620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 
00:25:12.632 [2024-07-15 19:19:52.783790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.783815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.783988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.784015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.784151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.784184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.784327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.784353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.784535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.784561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.784721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.784746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.784893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.784942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.785079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.785105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.785286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.785312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.785505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.785531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 
00:25:12.632 [2024-07-15 19:19:52.785697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.785722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.785891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.785917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.786092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.786118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.786257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.786283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.786419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.786445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.786590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.786617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.786776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.786802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.786976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.787002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.787170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.787196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 00:25:12.632 [2024-07-15 19:19:52.787359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.632 [2024-07-15 19:19:52.787385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.632 qpair failed and we were unable to recover it. 
00:25:12.633 [2024-07-15 19:19:52.787541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.787567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.787704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.787730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.633 [2024-07-15 19:19:52.787928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.787955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.633 [2024-07-15 19:19:52.788101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.788128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.633 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.633 [2024-07-15 19:19:52.788303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.788330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.788479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.788506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.788664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.788693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.788843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.788883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 
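The subsystem creation traced here (nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001) is the next RPC step; for the initiator's connection to 10.0.0.2:4420 to ever succeed, the subsystem also needs a TCP listener on that address. Roughly, again as a sketch against scripts/rpc.py (the bdev/namespace wiring the test adds, e.g. the Malloc0 bdev seen above, is elided):

  # Create the subsystem (-a: allow any host, -s: serial number) and add a TCP listener.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420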
00:25:12.633 [2024-07-15 19:19:52.789058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.789085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.789221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.789247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.789423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.789449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.789611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.789637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.789808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.789835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.789976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.790002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.790192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.790218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.790372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.790398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.790541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.790567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.790723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.790749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 
00:25:12.633 [2024-07-15 19:19:52.790913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.790946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.791116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.791142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.791279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.791305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.791485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.791511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.791689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.791715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.791883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.791919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.792085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.792111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.792251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.792277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.792409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.792435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 00:25:12.633 [2024-07-15 19:19:52.792583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.633 [2024-07-15 19:19:52.792609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.633 qpair failed and we were unable to recover it. 
00:25:12.634 [2024-07-15 19:19:52.792780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.792805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.792972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.792999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.793151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.793177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.793332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.793358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.793521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.793547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.793684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.793710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.793898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.793924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.794121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.794147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.794281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.794307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.794464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.794490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 
00:25:12.634 [2024-07-15 19:19:52.794628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.794654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.794824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.794851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.795023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.795050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.795210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.795236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.795380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.795406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.795537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.795563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.795732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.795759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.795899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.634 [2024-07-15 19:19:52.795925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.634 [2024-07-15 19:19:52.796073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.796106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 
00:25:12.634 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.634 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.634 [2024-07-15 19:19:52.796279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.796306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.796498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.796525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.796674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.796700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.796871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.796902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.797035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.797061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.797205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.797231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.797382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.797408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.797571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.797597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.797735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.797761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 
00:25:12.634 [2024-07-15 19:19:52.797929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.797955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.798119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.798144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.798312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.798338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.798500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.798530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.798663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.798688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.798823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.798849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.799048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.799074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.799216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.799242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.799376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.799402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.799586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.799612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 
00:25:12.634 [2024-07-15 19:19:52.799770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.634 [2024-07-15 19:19:52.799796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.634 qpair failed and we were unable to recover it. 00:25:12.634 [2024-07-15 19:19:52.799970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.799997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.800168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.800194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.800332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.800358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.800503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.800528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.800682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.800707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.800853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.800895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.801055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.801081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.801244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.801270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.801434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.801460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 
00:25:12.635 [2024-07-15 19:19:52.801624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.801650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.801808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.801834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.801985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.802013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.802187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.802213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.802375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.802401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.802557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.802583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.802724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.802750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.802922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.802949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.803116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.803142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.803305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.803331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 
00:25:12.635 [2024-07-15 19:19:52.803465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.803495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.803663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.803689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.803851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.803881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.635 [2024-07-15 19:19:52.804022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.804049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.635 [2024-07-15 19:19:52.804209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.635 [2024-07-15 19:19:52.804236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.635 [2024-07-15 19:19:52.804368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.804395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.804547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.804573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.804766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.804792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 
00:25:12.635 [2024-07-15 19:19:52.804967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.804993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.805154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.805180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.805313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.635 [2024-07-15 19:19:52.805339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.635 qpair failed and we were unable to recover it. 00:25:12.635 [2024-07-15 19:19:52.805491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.805517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.805685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.805716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.805853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.805884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.806039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.806065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.806255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.806281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.806469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.806495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.806635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.806661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 
00:25:12.636 [2024-07-15 19:19:52.806823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.806849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.807035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.807061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.807228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.807254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.807397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.807423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.807585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.807611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.807779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.636 [2024-07-15 19:19:52.807806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12200 with addr=10.0.0.2, port=4420 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.807907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.636 [2024-07-15 19:19:52.810365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.810551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.810580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.810601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.810616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.810651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 
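Editor's note on the block above: this is where the failure mode changes. Once nvmf_subsystem_add_listener takes effect ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the host's TCP connect succeeds, but the Fabrics CONNECT for the I/O qpair is rejected: the target no longer has a controller with the ID the host supplies ("Unknown controller ID 0x1"), so the host reports "Connect command failed, rc -5 ... sct 1, sc 130". On Linux, rc -5 is -EIO; sct 1 is the Command Specific status type, and sc 130 (0x82) is the Connect command's "Connect Invalid Parameters" status. The small, self-contained C sketch below decodes that pair; the name tables are written from the NVMe/NVMe-oF status layout as recalled here, not copied from SPDK headers, so treat them as illustrative.

/* Minimal sketch: decode the "sct 1, sc 130" pair reported by the host above.
 * The mappings are from the NVMe / NVMe-oF specs as recalled; they are an
 * illustration, not a copy of SPDK's own enums. */
#include <stdio.h>

static const char *sct_name(unsigned sct)
{
    switch (sct) {
    case 0x0: return "Generic Command Status";
    case 0x1: return "Command Specific Status";
    case 0x2: return "Media and Data Integrity Errors";
    case 0x3: return "Path Related Status";
    default:  return "Vendor Specific / Reserved";
    }
}

/* Command-specific status codes the Fabrics Connect command can return. */
static const char *connect_sc_name(unsigned sc)
{
    switch (sc) {
    case 0x82: return "Connect Invalid Parameters";
    case 0x83: return "Connect Restart Discovery";
    case 0x84: return "Connect Invalid Host";
    default:   return "Other";
    }
}

int main(void)
{
    unsigned sct = 1, sc = 130;   /* values taken from the log */
    printf("sct %u = %s, sc %u (0x%02x) = %s\n",
           sct, sct_name(sct), sc, sc, connect_sc_name(sc));
    /* Prints: sct 1 = Command Specific Status, sc 130 (0x82) = Connect Invalid Parameters */
    return 0;
}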
00:25:12.636 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.636 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:12.636 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.636 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.636 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.636 19:19:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3412229 00:25:12.636 [2024-07-15 19:19:52.820259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.820407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.820435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.820450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.820464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.820493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.830303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.830451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.830478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.830493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.830506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.830536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 
00:25:12.636 [2024-07-15 19:19:52.840326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.840481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.840507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.840522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.840536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.840581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.850336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.850490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.850517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.850532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.850546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.850591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.860289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.860432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.860459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.860474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.860488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.860516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 
00:25:12.636 [2024-07-15 19:19:52.870339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.870489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.870515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.870531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.870544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.870573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.880352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.880497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.880524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.880539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.880552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.880582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 00:25:12.636 [2024-07-15 19:19:52.890376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.636 [2024-07-15 19:19:52.890526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.636 [2024-07-15 19:19:52.890553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.636 [2024-07-15 19:19:52.890569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.636 [2024-07-15 19:19:52.890588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.636 [2024-07-15 19:19:52.890618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.636 qpair failed and we were unable to recover it. 
00:25:12.636 [2024-07-15 19:19:52.900411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.900550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.900577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.900593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.900607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.900650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:52.910465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.910608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.910635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.910650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.910663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.910706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:52.920452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.920607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.920634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.920650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.920663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.920693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 
00:25:12.637 [2024-07-15 19:19:52.930515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.930659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.930686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.930702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.930716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.930760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:52.940518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.940655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.940683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.940698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.940711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.940740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:52.950636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.950779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.950807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.950822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.950836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.950864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 
00:25:12.637 [2024-07-15 19:19:52.960607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.960757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.960786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.960803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.960817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.960862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:52.970666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.970815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.970844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.970859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.970873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.970917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:52.980639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.980775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.980802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.980823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.980837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.980866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 
00:25:12.637 [2024-07-15 19:19:52.990674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:52.990824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:52.990851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:52.990867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:52.990888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:52.990919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:53.000676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:53.000822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:53.000849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:53.000864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:53.000883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:53.000915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 00:25:12.637 [2024-07-15 19:19:53.010725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:53.010867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:53.010901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.637 [2024-07-15 19:19:53.010925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.637 [2024-07-15 19:19:53.010939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.637 [2024-07-15 19:19:53.010968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.637 qpair failed and we were unable to recover it. 
00:25:12.637 [2024-07-15 19:19:53.020796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.637 [2024-07-15 19:19:53.020950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.637 [2024-07-15 19:19:53.020977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.638 [2024-07-15 19:19:53.020993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.638 [2024-07-15 19:19:53.021006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.638 [2024-07-15 19:19:53.021036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.638 qpair failed and we were unable to recover it. 00:25:12.638 [2024-07-15 19:19:53.030783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.638 [2024-07-15 19:19:53.030925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.638 [2024-07-15 19:19:53.030953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.638 [2024-07-15 19:19:53.030968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.638 [2024-07-15 19:19:53.030981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.638 [2024-07-15 19:19:53.031010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.638 qpair failed and we were unable to recover it. 00:25:12.638 [2024-07-15 19:19:53.040801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.638 [2024-07-15 19:19:53.040956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.638 [2024-07-15 19:19:53.040983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.638 [2024-07-15 19:19:53.040998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.638 [2024-07-15 19:19:53.041011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.638 [2024-07-15 19:19:53.041040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.638 qpair failed and we were unable to recover it. 
00:25:12.901 [2024-07-15 19:19:53.050856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.051069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.051097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.051112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.051125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.051154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.060867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.061019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.061045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.061060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.061074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.061103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.070896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.071042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.071068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.071089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.071104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.071133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 
00:25:12.901 [2024-07-15 19:19:53.080995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.081145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.081171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.081186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.081200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.081228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.090971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.091120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.091146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.091170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.091182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.091212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.101006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.101175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.101202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.101217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.101230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.101258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 
00:25:12.901 [2024-07-15 19:19:53.111026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.111175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.111203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.111219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.111232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.111260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.121022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.121181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.121207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.121223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.121236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.121265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.131035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.131188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.131215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.131230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.131244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.131273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 
00:25:12.901 [2024-07-15 19:19:53.141082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.141227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.141254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.141268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.141281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.141309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.151115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.151285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.901 [2024-07-15 19:19:53.151314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.901 [2024-07-15 19:19:53.151332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.901 [2024-07-15 19:19:53.151347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.901 [2024-07-15 19:19:53.151377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.901 qpair failed and we were unable to recover it. 00:25:12.901 [2024-07-15 19:19:53.161139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.901 [2024-07-15 19:19:53.161291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.161318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.161340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.161354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.161384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 
00:25:12.902 [2024-07-15 19:19:53.171158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.171309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.171336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.171352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.171365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.171395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.181202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.181347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.181374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.181390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.181403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.181432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.191197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.191338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.191364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.191379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.191392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.191421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 
00:25:12.902 [2024-07-15 19:19:53.201259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.201407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.201434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.201450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.201463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.201493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.211314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.211473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.211501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.211517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.211531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.211574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.221297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.221438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.221465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.221481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.221495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.221525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 
00:25:12.902 [2024-07-15 19:19:53.231337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.231499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.231528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.231543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.231572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.231600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.241340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.241495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.241522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.241538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.241552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.241580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.251372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.251515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.251547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.251564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.251577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.251605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 
00:25:12.902 [2024-07-15 19:19:53.261400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.261559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.261585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.261600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.261614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.261643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.271450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.271601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.271629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.271644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.271658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.271701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.281472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.281617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.281644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.281659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.281673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.281701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 
00:25:12.902 [2024-07-15 19:19:53.291482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.291626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.291653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.291669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.291682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.291711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.902 qpair failed and we were unable to recover it. 00:25:12.902 [2024-07-15 19:19:53.301504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.902 [2024-07-15 19:19:53.301645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.902 [2024-07-15 19:19:53.301672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.902 [2024-07-15 19:19:53.301687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.902 [2024-07-15 19:19:53.301700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.902 [2024-07-15 19:19:53.301729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.903 qpair failed and we were unable to recover it. 00:25:12.903 [2024-07-15 19:19:53.311528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.903 [2024-07-15 19:19:53.311666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.903 [2024-07-15 19:19:53.311693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.903 [2024-07-15 19:19:53.311708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.903 [2024-07-15 19:19:53.311721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.903 [2024-07-15 19:19:53.311750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.903 qpair failed and we were unable to recover it. 
00:25:12.903 [2024-07-15 19:19:53.321569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.903 [2024-07-15 19:19:53.321724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.903 [2024-07-15 19:19:53.321751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.903 [2024-07-15 19:19:53.321766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.903 [2024-07-15 19:19:53.321780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:12.903 [2024-07-15 19:19:53.321808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.903 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.331618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.331767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.331794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.331810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.331823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.331851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.341628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.341778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.341810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.341827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.341840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.341869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 
00:25:13.165 [2024-07-15 19:19:53.351660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.351807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.351835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.351851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.351864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.351899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.361719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.361866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.361899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.361916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.361930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.361958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.371732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.371881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.371909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.371924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.371938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.371968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 
00:25:13.165 [2024-07-15 19:19:53.381766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.381904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.381931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.381947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.381961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.381995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.391771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.391924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.391953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.391973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.391987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.392016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.401808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.401958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.401985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.402000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.402013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.402042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 
00:25:13.165 [2024-07-15 19:19:53.411850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.412003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.412029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.412044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.412056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.412084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.421919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.422066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.422094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.422110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.422124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.422152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.431911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.432089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.432122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.432138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.432152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.432195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 
00:25:13.165 [2024-07-15 19:19:53.441930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.442083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.442113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.442132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.442145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.442175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.165 [2024-07-15 19:19:53.451965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.165 [2024-07-15 19:19:53.452108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.165 [2024-07-15 19:19:53.452135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.165 [2024-07-15 19:19:53.452151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.165 [2024-07-15 19:19:53.452165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.165 [2024-07-15 19:19:53.452193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.165 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.461975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.462117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.462145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.462161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.462174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.462203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 
00:25:13.166 [2024-07-15 19:19:53.472006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.472156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.472184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.472200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.472213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.472263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.482098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.482272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.482300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.482316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.482329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.482372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.492139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.492305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.492332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.492348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.492361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.492390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 
00:25:13.166 [2024-07-15 19:19:53.502105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.502258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.502285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.502301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.502314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.502343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.512127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.512277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.512304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.512320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.512333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.512361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.522264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.522416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.522448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.522465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.522478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.522507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 
00:25:13.166 [2024-07-15 19:19:53.532170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.532336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.532365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.532383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.532399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.532429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.542232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.542386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.542414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.542429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.542443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.542471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.552324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.552462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.552489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.552505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.552518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.552547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 
00:25:13.166 [2024-07-15 19:19:53.562267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.562412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.562439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.562454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.562468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.562502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.572391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.572543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.572572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.572587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.572601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.572631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.166 [2024-07-15 19:19:53.582370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.582529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.582560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.582578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.582606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.582636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 
00:25:13.166 [2024-07-15 19:19:53.592363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.166 [2024-07-15 19:19:53.592514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.166 [2024-07-15 19:19:53.592542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.166 [2024-07-15 19:19:53.592558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.166 [2024-07-15 19:19:53.592572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.166 [2024-07-15 19:19:53.592600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.166 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.602385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.602538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.602565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.602580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.602593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.602622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.612626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.612811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.612843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.612859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.612872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.612908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 
00:25:13.427 [2024-07-15 19:19:53.622529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.622711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.622738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.622754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.622769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.622797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.632511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.632660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.632687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.632702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.632715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.632746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.642542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.642695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.642721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.642736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.642749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.642779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 
00:25:13.427 [2024-07-15 19:19:53.652540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.652695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.652721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.652737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.652756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.652786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.662556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.662740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.662767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.662782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.662795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.662824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.672577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.672718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.672746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.672761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.672775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.672804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 
00:25:13.427 [2024-07-15 19:19:53.682614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.682774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.682803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.682818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.682831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.682860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.692644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.692787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.692814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.692829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.692842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.692871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.702679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.702833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.702861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.702882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.702899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.702928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 
00:25:13.427 [2024-07-15 19:19:53.712799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.712949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.712976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.712991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.713004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.713034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.722774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.427 [2024-07-15 19:19:53.722966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.427 [2024-07-15 19:19:53.722993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.427 [2024-07-15 19:19:53.723008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.427 [2024-07-15 19:19:53.723022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.427 [2024-07-15 19:19:53.723053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.427 qpair failed and we were unable to recover it. 00:25:13.427 [2024-07-15 19:19:53.732773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.732924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.732952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.732968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.732982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.733010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 
00:25:13.428 [2024-07-15 19:19:53.742781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.742928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.742956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.742971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.742990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.743019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.752838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.752993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.753021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.753037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.753050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.753080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.762856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.763016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.763043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.763059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.763072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.763101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 
00:25:13.428 [2024-07-15 19:19:53.772873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.773038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.773065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.773080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.773094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.773123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.782901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.783049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.783075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.783090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.783103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.783133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.793036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.793177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.793204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.793219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.793233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.793276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 
00:25:13.428 [2024-07-15 19:19:53.802990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.803187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.803213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.803229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.803242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.803270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.813011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.813172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.813199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.813214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.813227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.813255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.823024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.823164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.823191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.823206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.823220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.823248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 
00:25:13.428 [2024-07-15 19:19:53.833183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.833322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.833350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.833365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.833384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.833414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.843106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.843254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.843281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.843297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.843310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.843338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 00:25:13.428 [2024-07-15 19:19:53.853118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.428 [2024-07-15 19:19:53.853262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.428 [2024-07-15 19:19:53.853288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.428 [2024-07-15 19:19:53.853304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.428 [2024-07-15 19:19:53.853317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.428 [2024-07-15 19:19:53.853345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.428 qpair failed and we were unable to recover it. 
00:25:13.688 [2024-07-15 19:19:53.863163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.863316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.863343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.863359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.863372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.863401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 00:25:13.688 [2024-07-15 19:19:53.873233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.873417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.873444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.873460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.873474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.873503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 00:25:13.688 [2024-07-15 19:19:53.883230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.883374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.883401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.883416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.883430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.883460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 
00:25:13.688 [2024-07-15 19:19:53.893249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.893410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.893437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.893452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.893465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.893494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 00:25:13.688 [2024-07-15 19:19:53.903269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.903423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.903450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.903465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.903478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.903507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 00:25:13.688 [2024-07-15 19:19:53.913325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.913471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.913498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.913513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.913526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.913569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 
00:25:13.688 [2024-07-15 19:19:53.923319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.923470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.923496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.923517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.923531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.923561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 00:25:13.688 [2024-07-15 19:19:53.933378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.933525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.933551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.933566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.933578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.933608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 00:25:13.688 [2024-07-15 19:19:53.943375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.943541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.943567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.943582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.943594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.688 [2024-07-15 19:19:53.943624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.688 qpair failed and we were unable to recover it. 
00:25:13.688 [2024-07-15 19:19:53.953441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.688 [2024-07-15 19:19:53.953586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.688 [2024-07-15 19:19:53.953613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.688 [2024-07-15 19:19:53.953632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.688 [2024-07-15 19:19:53.953647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:53.953676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:53.963462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:53.963622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:53.963649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:53.963664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:53.963677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:53.963707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:53.973622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:53.973823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:53.973851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:53.973894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:53.973911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:53.973942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 
00:25:13.689 [2024-07-15 19:19:53.983564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:53.983727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:53.983754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:53.983769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:53.983782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:53.983826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:53.993522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:53.993653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:53.993680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:53.993695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:53.993707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:53.993738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:54.003607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:54.003778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:54.003805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:54.003820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:54.003832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:54.003862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 
00:25:13.689 [2024-07-15 19:19:54.013590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:54.013735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:54.013761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:54.013782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:54.013811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:54.013840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:54.023655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:54.023796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:54.023823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:54.023838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:54.023851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:54.023890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:54.033649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:54.033786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:54.033812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:54.033827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:54.033841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:54.033870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 
00:25:13.689 [2024-07-15 19:19:54.043693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:54.043833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:54.043859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:54.043875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:54.043901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:54.043931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:54.053709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:54.053861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:54.053893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:54.053909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:54.053923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:54.053953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 00:25:13.689 [2024-07-15 19:19:54.063767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.689 [2024-07-15 19:19:54.063924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.689 [2024-07-15 19:19:54.063950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.689 [2024-07-15 19:19:54.063965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.689 [2024-07-15 19:19:54.063978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.689 [2024-07-15 19:19:54.064008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.689 qpair failed and we were unable to recover it. 
00:25:13.689 [2024-07-15 19:19:54.073763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.690 [2024-07-15 19:19:54.073917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.690 [2024-07-15 19:19:54.073944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.690 [2024-07-15 19:19:54.073959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.690 [2024-07-15 19:19:54.073972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.690 [2024-07-15 19:19:54.074001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.690 qpair failed and we were unable to recover it. 00:25:13.690 [2024-07-15 19:19:54.083793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.690 [2024-07-15 19:19:54.083944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.690 [2024-07-15 19:19:54.083970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.690 [2024-07-15 19:19:54.083985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.690 [2024-07-15 19:19:54.083998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.690 [2024-07-15 19:19:54.084028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.690 qpair failed and we were unable to recover it. 00:25:13.690 [2024-07-15 19:19:54.093810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.690 [2024-07-15 19:19:54.093966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.690 [2024-07-15 19:19:54.093992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.690 [2024-07-15 19:19:54.094007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.690 [2024-07-15 19:19:54.094020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.690 [2024-07-15 19:19:54.094050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.690 qpair failed and we were unable to recover it. 
00:25:13.690 [2024-07-15 19:19:54.103856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.690 [2024-07-15 19:19:54.104054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.690 [2024-07-15 19:19:54.104080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.690 [2024-07-15 19:19:54.104101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.690 [2024-07-15 19:19:54.104116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.690 [2024-07-15 19:19:54.104145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.690 qpair failed and we were unable to recover it. 00:25:13.690 [2024-07-15 19:19:54.113866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.690 [2024-07-15 19:19:54.114017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.690 [2024-07-15 19:19:54.114043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.690 [2024-07-15 19:19:54.114058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.690 [2024-07-15 19:19:54.114071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.690 [2024-07-15 19:19:54.114101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.690 qpair failed and we were unable to recover it. 00:25:13.949 [2024-07-15 19:19:54.123914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.949 [2024-07-15 19:19:54.124057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.949 [2024-07-15 19:19:54.124083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.949 [2024-07-15 19:19:54.124099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.949 [2024-07-15 19:19:54.124112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.949 [2024-07-15 19:19:54.124141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.949 qpair failed and we were unable to recover it. 
00:25:13.949 [2024-07-15 19:19:54.133961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.949 [2024-07-15 19:19:54.134110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.949 [2024-07-15 19:19:54.134136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.949 [2024-07-15 19:19:54.134151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.949 [2024-07-15 19:19:54.134165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.949 [2024-07-15 19:19:54.134194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.949 qpair failed and we were unable to recover it. 00:25:13.949 [2024-07-15 19:19:54.143965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.949 [2024-07-15 19:19:54.144109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.949 [2024-07-15 19:19:54.144137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.949 [2024-07-15 19:19:54.144152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.949 [2024-07-15 19:19:54.144164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.949 [2024-07-15 19:19:54.144193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.949 qpair failed and we were unable to recover it. 00:25:13.949 [2024-07-15 19:19:54.154025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.949 [2024-07-15 19:19:54.154168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.949 [2024-07-15 19:19:54.154195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.949 [2024-07-15 19:19:54.154210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.949 [2024-07-15 19:19:54.154223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.949 [2024-07-15 19:19:54.154254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 
00:25:13.950 [2024-07-15 19:19:54.164066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.164277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.164303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.164318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.164330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.164361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.174073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.174234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.174261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.174276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.174290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.174318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.184082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.184260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.184286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.184301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.184315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.184343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 
00:25:13.950 [2024-07-15 19:19:54.194142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.194290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.194324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.194342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.194357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.194387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.204167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.204316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.204342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.204358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.204372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.204401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.214183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.214330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.214356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.214371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.214386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.214415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 
00:25:13.950 [2024-07-15 19:19:54.224210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.224355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.224382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.224397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.224411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.224439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.234212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.234352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.234378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.234393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.234405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.234434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.244255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.244399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.244424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.244438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.244451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.244479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 
00:25:13.950 [2024-07-15 19:19:54.254278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.254418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.254445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.254460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.254475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.254503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.264375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.264548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.264575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.264605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.264618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.264648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.274335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.274474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.274502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.274517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.274529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.274559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 
00:25:13.950 [2024-07-15 19:19:54.284374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.284519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.284549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.284565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.284579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.284609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.294442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.294638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.294679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.950 [2024-07-15 19:19:54.294694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.950 [2024-07-15 19:19:54.294708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.950 [2024-07-15 19:19:54.294751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.950 qpair failed and we were unable to recover it. 00:25:13.950 [2024-07-15 19:19:54.304427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.950 [2024-07-15 19:19:54.304566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.950 [2024-07-15 19:19:54.304593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.304608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.304621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.304650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 
00:25:13.951 [2024-07-15 19:19:54.314457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.951 [2024-07-15 19:19:54.314595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.951 [2024-07-15 19:19:54.314621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.314637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.314651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.314679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 00:25:13.951 [2024-07-15 19:19:54.324489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.951 [2024-07-15 19:19:54.324648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.951 [2024-07-15 19:19:54.324674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.324689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.324704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.324739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 00:25:13.951 [2024-07-15 19:19:54.334527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.951 [2024-07-15 19:19:54.334669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.951 [2024-07-15 19:19:54.334696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.334710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.334723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.334753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 
00:25:13.951 [2024-07-15 19:19:54.344549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.951 [2024-07-15 19:19:54.344688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.951 [2024-07-15 19:19:54.344714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.344729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.344743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.344772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 00:25:13.951 [2024-07-15 19:19:54.354586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.951 [2024-07-15 19:19:54.354722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.951 [2024-07-15 19:19:54.354749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.354764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.354777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.354806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 00:25:13.951 [2024-07-15 19:19:54.364601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.951 [2024-07-15 19:19:54.364749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.951 [2024-07-15 19:19:54.364775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.364790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.364803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.364831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 
00:25:13.951 [2024-07-15 19:19:54.374624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:13.951 [2024-07-15 19:19:54.374767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:13.951 [2024-07-15 19:19:54.374799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:13.951 [2024-07-15 19:19:54.374816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:13.951 [2024-07-15 19:19:54.374830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:13.951 [2024-07-15 19:19:54.374858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:13.951 qpair failed and we were unable to recover it. 00:25:14.210 [2024-07-15 19:19:54.384751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.210 [2024-07-15 19:19:54.384911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.210 [2024-07-15 19:19:54.384937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.210 [2024-07-15 19:19:54.384952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.210 [2024-07-15 19:19:54.384965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.210 [2024-07-15 19:19:54.384996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.210 qpair failed and we were unable to recover it. 00:25:14.210 [2024-07-15 19:19:54.394701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.210 [2024-07-15 19:19:54.394891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.210 [2024-07-15 19:19:54.394917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.210 [2024-07-15 19:19:54.394932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.210 [2024-07-15 19:19:54.394945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.210 [2024-07-15 19:19:54.394976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.210 qpair failed and we were unable to recover it. 
00:25:14.210 [2024-07-15 19:19:54.404704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.210 [2024-07-15 19:19:54.404852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.210 [2024-07-15 19:19:54.404884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.210 [2024-07-15 19:19:54.404901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.210 [2024-07-15 19:19:54.404915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.210 [2024-07-15 19:19:54.404944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.210 qpair failed and we were unable to recover it. 00:25:14.210 [2024-07-15 19:19:54.414753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.210 [2024-07-15 19:19:54.414910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.414935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.414950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.414962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.414996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.424753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.424908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.424936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.424951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.424964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.424993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 
00:25:14.211 [2024-07-15 19:19:54.434778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.434936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.434962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.434977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.434990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.435020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.444846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.445000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.445027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.445042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.445055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.445085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.454882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.455071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.455097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.455112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.455125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.455155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 
00:25:14.211 [2024-07-15 19:19:54.464867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.465011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.465042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.465058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.465072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.465101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.474920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.475097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.475124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.475140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.475154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.475183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.484940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.485090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.485117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.485132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.485146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.485175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 
00:25:14.211 [2024-07-15 19:19:54.494950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.495090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.495116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.495131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.495144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.495174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.505039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.505201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.505227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.505243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.505256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.505290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.515052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.515232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.515258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.515273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.515286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.515316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 
00:25:14.211 [2024-07-15 19:19:54.525184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.525355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.525381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.525397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.525410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.525440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.535066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.535217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.535243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.535258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.535271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.535300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 00:25:14.211 [2024-07-15 19:19:54.545138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.545281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.545307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.545322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.545335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.211 [2024-07-15 19:19:54.545365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.211 qpair failed and we were unable to recover it. 
00:25:14.211 [2024-07-15 19:19:54.555184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.211 [2024-07-15 19:19:54.555373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.211 [2024-07-15 19:19:54.555404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.211 [2024-07-15 19:19:54.555420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.211 [2024-07-15 19:19:54.555434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.555463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 00:25:14.212 [2024-07-15 19:19:54.565184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.565344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.565368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.565383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.565396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.565426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 00:25:14.212 [2024-07-15 19:19:54.575294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.575439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.575465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.575480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.575493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.575523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 
00:25:14.212 [2024-07-15 19:19:54.585210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.585346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.585373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.585388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.585400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.585430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 00:25:14.212 [2024-07-15 19:19:54.595347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.595542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.595583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.595598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.595616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.595660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 00:25:14.212 [2024-07-15 19:19:54.605333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.605487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.605513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.605529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.605542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.605585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 
00:25:14.212 [2024-07-15 19:19:54.615288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.615434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.615459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.615475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.615487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.615517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 00:25:14.212 [2024-07-15 19:19:54.625329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.625464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.625491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.625506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.625519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.625548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 00:25:14.212 [2024-07-15 19:19:54.635393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.212 [2024-07-15 19:19:54.635572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.212 [2024-07-15 19:19:54.635598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.212 [2024-07-15 19:19:54.635613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.212 [2024-07-15 19:19:54.635626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.212 [2024-07-15 19:19:54.635655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.212 qpair failed and we were unable to recover it. 
00:25:14.479 [2024-07-15 19:19:54.645389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.479 [2024-07-15 19:19:54.645544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.479 [2024-07-15 19:19:54.645570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.645585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.645598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.645628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.655440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.655587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.655613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.655628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.655642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.655671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.665440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.665584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.665610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.665625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.665638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.665667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 
00:25:14.480 [2024-07-15 19:19:54.675504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.675662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.675689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.675704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.675717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.675762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.685515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.685664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.685690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.685706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.685725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.685755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.695517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.695664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.695690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.695704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.695717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.695747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 
00:25:14.480 [2024-07-15 19:19:54.705574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.705771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.705799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.705814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.705827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.705857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.715612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.715749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.715776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.715792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.715804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.715834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.725619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.725763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.725790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.725805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.725818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.725848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 
00:25:14.480 [2024-07-15 19:19:54.735650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.735828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.735855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.735871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.735892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.735924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.745677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.745840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.745868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.745894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.745910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.745939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.755691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.755830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.755857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.755872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.755892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.755923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 
00:25:14.480 [2024-07-15 19:19:54.765723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.765870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.765904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.765920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.765933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.765963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.775753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.775907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.775934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.775950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.480 [2024-07-15 19:19:54.775969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.480 [2024-07-15 19:19:54.775999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.480 qpair failed and we were unable to recover it. 00:25:14.480 [2024-07-15 19:19:54.785773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.480 [2024-07-15 19:19:54.785925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.480 [2024-07-15 19:19:54.785951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.480 [2024-07-15 19:19:54.785966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.785980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.481 [2024-07-15 19:19:54.786009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.481 qpair failed and we were unable to recover it. 
00:25:14.481 [2024-07-15 19:19:54.795813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.795966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.795992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.796007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.796021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.481 [2024-07-15 19:19:54.796050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.805868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.806028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.806054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.806069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.806082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.481 [2024-07-15 19:19:54.806111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.815892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.816077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.816103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.816118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.816132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc12200 00:25:14.481 [2024-07-15 19:19:54.816161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:14.481 qpair failed and we were unable to recover it. 
00:25:14.481 [2024-07-15 19:19:54.825912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.826077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.826111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.826127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.826141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.826188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.835943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.836117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.836145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.836169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.836184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.836215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.846011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.846169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.846197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.846212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.846226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.846257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 
00:25:14.481 [2024-07-15 19:19:54.856001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.856177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.856204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.856220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.856234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.856264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.866037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.866182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.866209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.866234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.866249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.866295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.876069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.876218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.876246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.876262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.876276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.876307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 
00:25:14.481 [2024-07-15 19:19:54.886096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.886247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.886274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.886289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.886303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.886333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.896137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.896299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.896326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.896341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.896354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.896399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 00:25:14.481 [2024-07-15 19:19:54.906159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.481 [2024-07-15 19:19:54.906313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.481 [2024-07-15 19:19:54.906340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.481 [2024-07-15 19:19:54.906356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.481 [2024-07-15 19:19:54.906369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.481 [2024-07-15 19:19:54.906399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.481 qpair failed and we were unable to recover it. 
00:25:14.743 [2024-07-15 19:19:54.916137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.916297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.916323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.916338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.916351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.916381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:54.926181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.926325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.926352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.926367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.926380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.926411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:54.936190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.936330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.936357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.936372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.936385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.936414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 
00:25:14.743 [2024-07-15 19:19:54.946226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.946417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.946444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.946459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.946472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.946503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:54.956285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.956423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.956455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.956472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.956485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.956516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:54.966296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.966441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.966466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.966483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.966497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.966528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 
00:25:14.743 [2024-07-15 19:19:54.976329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.976478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.976506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.976521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.976534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.976566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:54.986367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.986506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.986533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.986548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.986561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.986591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:54.996429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:54.996573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:54.996600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:54.996615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:54.996628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:54.996673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 
00:25:14.743 [2024-07-15 19:19:55.006407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:55.006584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:55.006611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:55.006627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:55.006640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:55.006670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:55.016462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:55.016613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:55.016640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:55.016655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:55.016669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:55.016698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 00:25:14.743 [2024-07-15 19:19:55.026468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.743 [2024-07-15 19:19:55.026609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.743 [2024-07-15 19:19:55.026636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.743 [2024-07-15 19:19:55.026652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.743 [2024-07-15 19:19:55.026666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.743 [2024-07-15 19:19:55.026722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.743 qpair failed and we were unable to recover it. 
00:25:14.743 [2024-07-15 19:19:55.036511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.036684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.036725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.036741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.036753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.036797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.046497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.046641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.046673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.046690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.046703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.046733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.056564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.056705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.056732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.056746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.056760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.056804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 
00:25:14.744 [2024-07-15 19:19:55.066575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.066723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.066750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.066765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.066777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.066822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.076579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.076734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.076761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.076776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.076789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.076820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.086623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.086769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.086796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.086812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.086826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.086861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 
00:25:14.744 [2024-07-15 19:19:55.096651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.096794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.096822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.096838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.096851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.096890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.106667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.106805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.106832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.106848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.106861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.106900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.116724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.116884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.116911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.116926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.116940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.116970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 
00:25:14.744 [2024-07-15 19:19:55.126747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.126913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.126940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.126957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.126971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.127000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.136839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.137013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.137048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.137064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.137078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.137108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 00:25:14.744 [2024-07-15 19:19:55.146802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.146946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.146973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.744 [2024-07-15 19:19:55.146996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.744 [2024-07-15 19:19:55.147010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.744 [2024-07-15 19:19:55.147039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.744 qpair failed and we were unable to recover it. 
00:25:14.744 [2024-07-15 19:19:55.156831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.744 [2024-07-15 19:19:55.156999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.744 [2024-07-15 19:19:55.157028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.745 [2024-07-15 19:19:55.157044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.745 [2024-07-15 19:19:55.157057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.745 [2024-07-15 19:19:55.157089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.745 qpair failed and we were unable to recover it. 00:25:14.745 [2024-07-15 19:19:55.166867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:14.745 [2024-07-15 19:19:55.167026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:14.745 [2024-07-15 19:19:55.167054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:14.745 [2024-07-15 19:19:55.167069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:14.745 [2024-07-15 19:19:55.167082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:14.745 [2024-07-15 19:19:55.167113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:14.745 qpair failed and we were unable to recover it. 00:25:15.006 [2024-07-15 19:19:55.176888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.177092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.177118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.177134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.177153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.177184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 
00:25:15.006 [2024-07-15 19:19:55.186903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.187041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.187067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.187084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.187097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.187127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 00:25:15.006 [2024-07-15 19:19:55.196934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.197079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.197106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.197121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.197134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.197164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 00:25:15.006 [2024-07-15 19:19:55.206975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.207172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.207198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.207214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.207226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.207256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 
00:25:15.006 [2024-07-15 19:19:55.216990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.217135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.217162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.217178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.217192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.217222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 00:25:15.006 [2024-07-15 19:19:55.227037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.227181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.227209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.227224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.227237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.227266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 00:25:15.006 [2024-07-15 19:19:55.237068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.237205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.237233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.237248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.237262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.237307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 
00:25:15.006 [2024-07-15 19:19:55.247084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.247230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.247256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.247272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.247285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.247315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 00:25:15.006 [2024-07-15 19:19:55.257105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.006 [2024-07-15 19:19:55.257265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.006 [2024-07-15 19:19:55.257292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.006 [2024-07-15 19:19:55.257307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.006 [2024-07-15 19:19:55.257320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.006 [2024-07-15 19:19:55.257349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.006 qpair failed and we were unable to recover it. 00:25:15.006 [2024-07-15 19:19:55.267191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.267384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.267410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.267430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.267443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.267475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 
00:25:15.007 [2024-07-15 19:19:55.277170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.277324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.277351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.277366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.277379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.277424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.287203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.287353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.287379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.287394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.287407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.287437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.297214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.297364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.297390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.297405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.297418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.297448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 
00:25:15.007 [2024-07-15 19:19:55.307252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.307401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.307427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.307443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.307456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.307485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.317318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.317524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.317567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.317585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.317599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.317645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.327330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.327494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.327521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.327537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.327566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.327596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 
00:25:15.007 [2024-07-15 19:19:55.337333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.337477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.337503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.337518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.337532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.337563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.347353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.347496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.347523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.347539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.347552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.347583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.357418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.357562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.357588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.357609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.357638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.357668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 
00:25:15.007 [2024-07-15 19:19:55.367468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.367616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.367642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.367657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.367670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.367713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.377460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.377608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.377634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.377648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.377661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.377708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 00:25:15.007 [2024-07-15 19:19:55.387567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.387718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.387752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.387767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.007 [2024-07-15 19:19:55.387795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.007 [2024-07-15 19:19:55.387834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.007 qpair failed and we were unable to recover it. 
00:25:15.007 [2024-07-15 19:19:55.397531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.007 [2024-07-15 19:19:55.397710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.007 [2024-07-15 19:19:55.397736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.007 [2024-07-15 19:19:55.397752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.008 [2024-07-15 19:19:55.397765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.008 [2024-07-15 19:19:55.397795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.008 qpair failed and we were unable to recover it. 00:25:15.008 [2024-07-15 19:19:55.407526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.008 [2024-07-15 19:19:55.407671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.008 [2024-07-15 19:19:55.407698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.008 [2024-07-15 19:19:55.407713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.008 [2024-07-15 19:19:55.407726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.008 [2024-07-15 19:19:55.407757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.008 qpair failed and we were unable to recover it. 00:25:15.008 [2024-07-15 19:19:55.417550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.008 [2024-07-15 19:19:55.417730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.008 [2024-07-15 19:19:55.417755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.008 [2024-07-15 19:19:55.417786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.008 [2024-07-15 19:19:55.417799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.008 [2024-07-15 19:19:55.417827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.008 qpair failed and we were unable to recover it. 
00:25:15.008 [2024-07-15 19:19:55.427601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.008 [2024-07-15 19:19:55.427795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.008 [2024-07-15 19:19:55.427822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.008 [2024-07-15 19:19:55.427852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.008 [2024-07-15 19:19:55.427866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.008 [2024-07-15 19:19:55.427928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.008 qpair failed and we were unable to recover it. 00:25:15.269 [2024-07-15 19:19:55.437630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.269 [2024-07-15 19:19:55.437769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.269 [2024-07-15 19:19:55.437796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.269 [2024-07-15 19:19:55.437811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.269 [2024-07-15 19:19:55.437825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.269 [2024-07-15 19:19:55.437870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.269 qpair failed and we were unable to recover it. 00:25:15.269 [2024-07-15 19:19:55.447676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.269 [2024-07-15 19:19:55.447840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.269 [2024-07-15 19:19:55.447871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.269 [2024-07-15 19:19:55.447896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.269 [2024-07-15 19:19:55.447910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.269 [2024-07-15 19:19:55.447941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.269 qpair failed and we were unable to recover it. 
00:25:15.269 [2024-07-15 19:19:55.457684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.269 [2024-07-15 19:19:55.457889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.269 [2024-07-15 19:19:55.457917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.269 [2024-07-15 19:19:55.457932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.269 [2024-07-15 19:19:55.457946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.269 [2024-07-15 19:19:55.457976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.269 qpair failed and we were unable to recover it. 00:25:15.269 [2024-07-15 19:19:55.467705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.269 [2024-07-15 19:19:55.467851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.269 [2024-07-15 19:19:55.467886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.269 [2024-07-15 19:19:55.467903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.269 [2024-07-15 19:19:55.467918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.269 [2024-07-15 19:19:55.467947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.269 qpair failed and we were unable to recover it. 00:25:15.269 [2024-07-15 19:19:55.477722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.269 [2024-07-15 19:19:55.477862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.269 [2024-07-15 19:19:55.477894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.269 [2024-07-15 19:19:55.477911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.269 [2024-07-15 19:19:55.477924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.269 [2024-07-15 19:19:55.477954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.269 qpair failed and we were unable to recover it. 
00:25:15.269 [2024-07-15 19:19:55.487748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.487903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.487930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.487946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.487959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.487996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.497816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.497964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.497991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.498007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.498020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.498051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.507862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.508015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.508042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.508058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.508071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.508102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 
00:25:15.270 [2024-07-15 19:19:55.517822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.517968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.517996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.518012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.518026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.518056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.527889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.528041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.528068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.528084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.528097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.528128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.537919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.538065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.538097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.538113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.538127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.538158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 
00:25:15.270 [2024-07-15 19:19:55.547915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.548056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.548083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.548098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.548112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.548143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.557929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.558104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.558131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.558147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.558160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.558191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.567995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.568144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.568171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.568186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.568200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.568245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 
00:25:15.270 [2024-07-15 19:19:55.578015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.578153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.578179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.578194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.578213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.578244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.588039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.588181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.588208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.588223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.588236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.588266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.598173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.598321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.598348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.598364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.598377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.598423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 
00:25:15.270 [2024-07-15 19:19:55.608137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.608366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.608395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.608410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.608424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.608468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.618140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.618289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.270 [2024-07-15 19:19:55.618317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.270 [2024-07-15 19:19:55.618333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.270 [2024-07-15 19:19:55.618346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.270 [2024-07-15 19:19:55.618392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.270 qpair failed and we were unable to recover it. 00:25:15.270 [2024-07-15 19:19:55.628243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.270 [2024-07-15 19:19:55.628393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.628421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.628440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.628455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.628500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 
00:25:15.271 [2024-07-15 19:19:55.638161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.271 [2024-07-15 19:19:55.638304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.638332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.638348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.638361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.638391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 00:25:15.271 [2024-07-15 19:19:55.648215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.271 [2024-07-15 19:19:55.648367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.648394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.648409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.648438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.648467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 00:25:15.271 [2024-07-15 19:19:55.658238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.271 [2024-07-15 19:19:55.658385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.658412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.658427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.658441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.658470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 
00:25:15.271 [2024-07-15 19:19:55.668306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.271 [2024-07-15 19:19:55.668455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.668482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.668497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.668516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.668546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 00:25:15.271 [2024-07-15 19:19:55.678342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.271 [2024-07-15 19:19:55.678503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.678532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.678548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.678575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.678605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 00:25:15.271 [2024-07-15 19:19:55.688351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.271 [2024-07-15 19:19:55.688497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.688523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.688539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.688552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.688596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 
00:25:15.271 [2024-07-15 19:19:55.698348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.271 [2024-07-15 19:19:55.698494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.271 [2024-07-15 19:19:55.698521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.271 [2024-07-15 19:19:55.698537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.271 [2024-07-15 19:19:55.698550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.271 [2024-07-15 19:19:55.698579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.271 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.708412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.708584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.708611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.708627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.708641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.708671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.718418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.718575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.718602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.718618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.718631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.718661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 
00:25:15.530 [2024-07-15 19:19:55.728475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.728621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.728648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.728663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.728676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.728706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.738479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.738622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.738650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.738665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.738678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.738723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.748518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.748665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.748693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.748709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.748722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.748768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 
00:25:15.530 [2024-07-15 19:19:55.758549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.758695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.758722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.758743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.758757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.758803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.768574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.768720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.768748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.768764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.768777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.768808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.778591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.778739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.778766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.778782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.778796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.778826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 
00:25:15.530 [2024-07-15 19:19:55.788707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.788862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.788896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.788927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.788941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.788972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.798661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.798798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.798824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.798840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.798853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.798905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 00:25:15.530 [2024-07-15 19:19:55.808701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.530 [2024-07-15 19:19:55.808846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.530 [2024-07-15 19:19:55.808873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.530 [2024-07-15 19:19:55.808896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.530 [2024-07-15 19:19:55.808910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.530 [2024-07-15 19:19:55.808940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.530 qpair failed and we were unable to recover it. 
00:25:15.530 [2024-07-15 19:19:55.818703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.818845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.818872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.818894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.818908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.818938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.828728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.828869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.828904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.828921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.828935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.828964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.838782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.838928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.838955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.838970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.838982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.839013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 
00:25:15.531 [2024-07-15 19:19:55.848794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.848969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.849002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.849018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.849031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.849062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.858809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.858962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.858989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.859004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.859017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.859047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.868863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.869016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.869042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.869058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.869071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.869101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 
00:25:15.531 [2024-07-15 19:19:55.878920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.879101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.879129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.879144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.879157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.879188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.888933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.889087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.889112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.889133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.889146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.889181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.898943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.899082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.899107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.899122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.899135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.899166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 
00:25:15.531 [2024-07-15 19:19:55.908969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.909110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.909136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.909151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.909164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.909195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.919007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.919162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.919188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.919203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.919216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.919247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.929150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.929311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.929337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.929351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.929364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.929409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 
00:25:15.531 [2024-07-15 19:19:55.939084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.939236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.939269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.939287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.939317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.531 [2024-07-15 19:19:55.939349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.531 qpair failed and we were unable to recover it. 00:25:15.531 [2024-07-15 19:19:55.949119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.531 [2024-07-15 19:19:55.949280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.531 [2024-07-15 19:19:55.949306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.531 [2024-07-15 19:19:55.949321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.531 [2024-07-15 19:19:55.949336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.532 [2024-07-15 19:19:55.949381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.532 qpair failed and we were unable to recover it. 00:25:15.532 [2024-07-15 19:19:55.959132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.532 [2024-07-15 19:19:55.959283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.532 [2024-07-15 19:19:55.959307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.532 [2024-07-15 19:19:55.959330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.532 [2024-07-15 19:19:55.959344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.532 [2024-07-15 19:19:55.959373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.532 qpair failed and we were unable to recover it. 
00:25:15.790 [2024-07-15 19:19:55.969263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.790 [2024-07-15 19:19:55.969420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.790 [2024-07-15 19:19:55.969446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.790 [2024-07-15 19:19:55.969460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.790 [2024-07-15 19:19:55.969489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.790 [2024-07-15 19:19:55.969519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.790 qpair failed and we were unable to recover it. 00:25:15.790 [2024-07-15 19:19:55.979164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.790 [2024-07-15 19:19:55.979323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.790 [2024-07-15 19:19:55.979349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.790 [2024-07-15 19:19:55.979365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.790 [2024-07-15 19:19:55.979384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.790 [2024-07-15 19:19:55.979415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.790 qpair failed and we were unable to recover it. 00:25:15.790 [2024-07-15 19:19:55.989193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.790 [2024-07-15 19:19:55.989336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.790 [2024-07-15 19:19:55.989362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.790 [2024-07-15 19:19:55.989377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.790 [2024-07-15 19:19:55.989391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.790 [2024-07-15 19:19:55.989421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.790 qpair failed and we were unable to recover it. 
00:25:15.790 [2024-07-15 19:19:55.999303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.790 [2024-07-15 19:19:55.999455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.790 [2024-07-15 19:19:55.999481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.790 [2024-07-15 19:19:55.999496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.790 [2024-07-15 19:19:55.999510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.790 [2024-07-15 19:19:55.999556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.790 qpair failed and we were unable to recover it. 00:25:15.790 [2024-07-15 19:19:56.009266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.790 [2024-07-15 19:19:56.009421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.790 [2024-07-15 19:19:56.009448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.790 [2024-07-15 19:19:56.009463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.790 [2024-07-15 19:19:56.009477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.790 [2024-07-15 19:19:56.009507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.790 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.019289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.019461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.019486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.019501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.019515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.019544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 
00:25:15.791 [2024-07-15 19:19:56.029379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.029549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.029576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.029606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.029620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.029649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.039463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.039629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.039654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.039668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.039681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.039725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.049416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.049584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.049611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.049625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.049654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.049683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 
00:25:15.791 [2024-07-15 19:19:56.059387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.059537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.059564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.059579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.059592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.059621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.069464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.069612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.069638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.069653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.069673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.069705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.079463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.079615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.079641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.079655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.079669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.079698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 
00:25:15.791 [2024-07-15 19:19:56.089484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.089641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.089667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.089682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.089696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.089726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.099576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.099741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.099768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.099783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.099796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.099841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.109541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.109688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.109714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.109729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.109742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.109787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 
00:25:15.791 [2024-07-15 19:19:56.119561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.119704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.119730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.119745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.119759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.119788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.129619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.129765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.129792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.129807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.129821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.129863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.139706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.139853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.139886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.139903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.139917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.139947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 
00:25:15.791 [2024-07-15 19:19:56.149636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.791 [2024-07-15 19:19:56.149781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.791 [2024-07-15 19:19:56.149807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.791 [2024-07-15 19:19:56.149822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.791 [2024-07-15 19:19:56.149836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.791 [2024-07-15 19:19:56.149867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.791 qpair failed and we were unable to recover it. 00:25:15.791 [2024-07-15 19:19:56.159670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.792 [2024-07-15 19:19:56.159815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.792 [2024-07-15 19:19:56.159841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.792 [2024-07-15 19:19:56.159862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.792 [2024-07-15 19:19:56.159883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.792 [2024-07-15 19:19:56.159916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.792 qpair failed and we were unable to recover it. 00:25:15.792 [2024-07-15 19:19:56.169698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.792 [2024-07-15 19:19:56.169853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.792 [2024-07-15 19:19:56.169887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.792 [2024-07-15 19:19:56.169904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.792 [2024-07-15 19:19:56.169918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.792 [2024-07-15 19:19:56.169948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.792 qpair failed and we were unable to recover it. 
00:25:15.792 [2024-07-15 19:19:56.179731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.792 [2024-07-15 19:19:56.179937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.792 [2024-07-15 19:19:56.179963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.792 [2024-07-15 19:19:56.179979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.792 [2024-07-15 19:19:56.179996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.792 [2024-07-15 19:19:56.180026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.792 qpair failed and we were unable to recover it. 00:25:15.792 [2024-07-15 19:19:56.189752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.792 [2024-07-15 19:19:56.189899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.792 [2024-07-15 19:19:56.189926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.792 [2024-07-15 19:19:56.189944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.792 [2024-07-15 19:19:56.189956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.792 [2024-07-15 19:19:56.189986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.792 qpair failed and we were unable to recover it. 00:25:15.792 [2024-07-15 19:19:56.199778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.792 [2024-07-15 19:19:56.199930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.792 [2024-07-15 19:19:56.199956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.792 [2024-07-15 19:19:56.199971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.792 [2024-07-15 19:19:56.199985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.792 [2024-07-15 19:19:56.200015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.792 qpair failed and we were unable to recover it. 
00:25:15.792 [2024-07-15 19:19:56.209839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.792 [2024-07-15 19:19:56.209991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.792 [2024-07-15 19:19:56.210017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.792 [2024-07-15 19:19:56.210032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.792 [2024-07-15 19:19:56.210046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.792 [2024-07-15 19:19:56.210076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.792 qpair failed and we were unable to recover it. 00:25:15.792 [2024-07-15 19:19:56.219847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:15.792 [2024-07-15 19:19:56.220013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:15.792 [2024-07-15 19:19:56.220039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:15.792 [2024-07-15 19:19:56.220054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:15.792 [2024-07-15 19:19:56.220067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:15.792 [2024-07-15 19:19:56.220109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:15.792 qpair failed and we were unable to recover it. 00:25:16.050 [2024-07-15 19:19:56.229870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.050 [2024-07-15 19:19:56.230034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.050 [2024-07-15 19:19:56.230061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.050 [2024-07-15 19:19:56.230076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.050 [2024-07-15 19:19:56.230090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.050 [2024-07-15 19:19:56.230122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.050 qpair failed and we were unable to recover it. 
00:25:16.050 [2024-07-15 19:19:56.239913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.050 [2024-07-15 19:19:56.240095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.050 [2024-07-15 19:19:56.240121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.050 [2024-07-15 19:19:56.240136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.050 [2024-07-15 19:19:56.240150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.050 [2024-07-15 19:19:56.240196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.050 qpair failed and we were unable to recover it. 00:25:16.050 [2024-07-15 19:19:56.249967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.050 [2024-07-15 19:19:56.250130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.050 [2024-07-15 19:19:56.250162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.050 [2024-07-15 19:19:56.250178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.050 [2024-07-15 19:19:56.250192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.050 [2024-07-15 19:19:56.250238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.050 qpair failed and we were unable to recover it. 00:25:16.050 [2024-07-15 19:19:56.259953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.050 [2024-07-15 19:19:56.260105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.260130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.260145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.260159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.260190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 
00:25:16.051 [2024-07-15 19:19:56.270016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.270164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.270191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.270211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.270225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.270271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.280029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.280175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.280201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.280217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.280231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.280277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.290064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.290218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.290243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.290259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.290272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.290308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 
00:25:16.051 [2024-07-15 19:19:56.300119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.300285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.300310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.300326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.300340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.300369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.310105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.310249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.310275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.310290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.310304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.310334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.320176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.320339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.320368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.320383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.320412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.320443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 
00:25:16.051 [2024-07-15 19:19:56.330177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.330323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.330349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.330364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.330378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.330407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.340220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.340369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.340400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.340416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.340430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.340459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.350208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.350372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.350398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.350413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.350444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.350475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 
00:25:16.051 [2024-07-15 19:19:56.360340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.360490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.360516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.360531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.360545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.360590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.370288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.370447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.370473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.370488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.370501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.370530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.380297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.380444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.380469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.380484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.380498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.380535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 
00:25:16.051 [2024-07-15 19:19:56.390449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.390609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.390634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.390649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.390662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.051 [2024-07-15 19:19:56.390704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.051 qpair failed and we were unable to recover it. 00:25:16.051 [2024-07-15 19:19:56.400388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.051 [2024-07-15 19:19:56.400531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.051 [2024-07-15 19:19:56.400557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.051 [2024-07-15 19:19:56.400572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.051 [2024-07-15 19:19:56.400586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.400616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 00:25:16.052 [2024-07-15 19:19:56.410449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.410605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.410634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.410649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.410678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.410707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 
00:25:16.052 [2024-07-15 19:19:56.420513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.420664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.420690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.420705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.420717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.420763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 00:25:16.052 [2024-07-15 19:19:56.430473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.430619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.430646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.430662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.430677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.430723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 00:25:16.052 [2024-07-15 19:19:56.440467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.440615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.440647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.440662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.440676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.440712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 
00:25:16.052 [2024-07-15 19:19:56.450568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.450722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.450749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.450764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.450793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.450822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 00:25:16.052 [2024-07-15 19:19:56.460526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.460678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.460708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.460723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.460737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.460766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 00:25:16.052 [2024-07-15 19:19:56.470569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.470713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.470739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.470754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.470774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.470832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 
00:25:16.052 [2024-07-15 19:19:56.480637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.052 [2024-07-15 19:19:56.480798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.052 [2024-07-15 19:19:56.480824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.052 [2024-07-15 19:19:56.480839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.052 [2024-07-15 19:19:56.480853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.052 [2024-07-15 19:19:56.480891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.052 qpair failed and we were unable to recover it. 00:25:16.313 [2024-07-15 19:19:56.490650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.313 [2024-07-15 19:19:56.490803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.313 [2024-07-15 19:19:56.490830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.313 [2024-07-15 19:19:56.490845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.313 [2024-07-15 19:19:56.490860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.313 [2024-07-15 19:19:56.490905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.313 qpair failed and we were unable to recover it. 00:25:16.313 [2024-07-15 19:19:56.500642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.313 [2024-07-15 19:19:56.500800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.313 [2024-07-15 19:19:56.500827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.313 [2024-07-15 19:19:56.500842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.313 [2024-07-15 19:19:56.500856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.313 [2024-07-15 19:19:56.500900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.313 qpair failed and we were unable to recover it. 
00:25:16.313 [2024-07-15 19:19:56.510687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.313 [2024-07-15 19:19:56.510873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.313 [2024-07-15 19:19:56.510907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.313 [2024-07-15 19:19:56.510922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.313 [2024-07-15 19:19:56.510937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.313 [2024-07-15 19:19:56.510967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.313 qpair failed and we were unable to recover it. 00:25:16.313 [2024-07-15 19:19:56.520717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.313 [2024-07-15 19:19:56.520900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.313 [2024-07-15 19:19:56.520926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.313 [2024-07-15 19:19:56.520942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.313 [2024-07-15 19:19:56.520955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.313 [2024-07-15 19:19:56.520986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.313 qpair failed and we were unable to recover it. 00:25:16.313 [2024-07-15 19:19:56.530752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.313 [2024-07-15 19:19:56.530964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.313 [2024-07-15 19:19:56.530991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.313 [2024-07-15 19:19:56.531006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.313 [2024-07-15 19:19:56.531023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.313 [2024-07-15 19:19:56.531055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.313 qpair failed and we were unable to recover it. 
00:25:16.313 [2024-07-15 19:19:56.540873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.313 [2024-07-15 19:19:56.541073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.313 [2024-07-15 19:19:56.541100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.313 [2024-07-15 19:19:56.541114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.313 [2024-07-15 19:19:56.541128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.541157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.550825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.550987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.551014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.551029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.551043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.551073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.560851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.561049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.561076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.561101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.561115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.561147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 
00:25:16.314 [2024-07-15 19:19:56.570857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.571021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.571051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.571066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.571080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.571110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.580867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.581064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.581090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.581105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.581119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.581149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.590910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.591055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.591081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.591096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.591109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.591139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 
00:25:16.314 [2024-07-15 19:19:56.600933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.601085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.601111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.601125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.601139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.601183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.611040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.611226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.611251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.611282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.611296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.611339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.621080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.621228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.621253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.621269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.621283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.621328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 
00:25:16.314 [2024-07-15 19:19:56.631025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.631172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.631198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.631213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.631227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.631271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.641041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.641190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.641215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.641230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.641245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.641274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.651099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.651293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.651324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.651340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.651354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.651383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 
00:25:16.314 [2024-07-15 19:19:56.661127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.661275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.661301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.661315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.661329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.661359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.314 [2024-07-15 19:19:56.671131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.314 [2024-07-15 19:19:56.671275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.314 [2024-07-15 19:19:56.671300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.314 [2024-07-15 19:19:56.671315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.314 [2024-07-15 19:19:56.671329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.314 [2024-07-15 19:19:56.671358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.314 qpair failed and we were unable to recover it. 00:25:16.315 [2024-07-15 19:19:56.681170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.315 [2024-07-15 19:19:56.681345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.315 [2024-07-15 19:19:56.681371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.315 [2024-07-15 19:19:56.681386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.315 [2024-07-15 19:19:56.681399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.315 [2024-07-15 19:19:56.681441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.315 qpair failed and we were unable to recover it. 
00:25:16.315 [2024-07-15 19:19:56.691206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.315 [2024-07-15 19:19:56.691350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.315 [2024-07-15 19:19:56.691377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.315 [2024-07-15 19:19:56.691393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.315 [2024-07-15 19:19:56.691406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.315 [2024-07-15 19:19:56.691435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.315 qpair failed and we were unable to recover it. 00:25:16.315 [2024-07-15 19:19:56.701213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.315 [2024-07-15 19:19:56.701358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.315 [2024-07-15 19:19:56.701384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.315 [2024-07-15 19:19:56.701400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.315 [2024-07-15 19:19:56.701413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.315 [2024-07-15 19:19:56.701442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.315 qpair failed and we were unable to recover it. 00:25:16.315 [2024-07-15 19:19:56.711236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.315 [2024-07-15 19:19:56.711384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.315 [2024-07-15 19:19:56.711410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.315 [2024-07-15 19:19:56.711426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.315 [2024-07-15 19:19:56.711439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.315 [2024-07-15 19:19:56.711469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.315 qpair failed and we were unable to recover it. 
00:25:16.315 [2024-07-15 19:19:56.721297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.315 [2024-07-15 19:19:56.721439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.315 [2024-07-15 19:19:56.721465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.315 [2024-07-15 19:19:56.721481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.315 [2024-07-15 19:19:56.721494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.315 [2024-07-15 19:19:56.721523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.315 qpair failed and we were unable to recover it. 00:25:16.315 [2024-07-15 19:19:56.731342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.315 [2024-07-15 19:19:56.731491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.315 [2024-07-15 19:19:56.731518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.315 [2024-07-15 19:19:56.731537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.315 [2024-07-15 19:19:56.731550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.315 [2024-07-15 19:19:56.731596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.315 qpair failed and we were unable to recover it. 00:25:16.315 [2024-07-15 19:19:56.741343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.315 [2024-07-15 19:19:56.741484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.315 [2024-07-15 19:19:56.741517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.315 [2024-07-15 19:19:56.741533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.315 [2024-07-15 19:19:56.741547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.315 [2024-07-15 19:19:56.741577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.315 qpair failed and we were unable to recover it. 
00:25:16.576 [2024-07-15 19:19:56.751373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.576 [2024-07-15 19:19:56.751534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.576 [2024-07-15 19:19:56.751561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.576 [2024-07-15 19:19:56.751576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.576 [2024-07-15 19:19:56.751589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.576 [2024-07-15 19:19:56.751619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.576 qpair failed and we were unable to recover it. 00:25:16.576 [2024-07-15 19:19:56.761411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.576 [2024-07-15 19:19:56.761586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.576 [2024-07-15 19:19:56.761613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.576 [2024-07-15 19:19:56.761629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.576 [2024-07-15 19:19:56.761642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.576 [2024-07-15 19:19:56.761671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.576 qpair failed and we were unable to recover it. 00:25:16.576 [2024-07-15 19:19:56.771440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.576 [2024-07-15 19:19:56.771584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.576 [2024-07-15 19:19:56.771610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.576 [2024-07-15 19:19:56.771625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.576 [2024-07-15 19:19:56.771638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.576 [2024-07-15 19:19:56.771668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.576 qpair failed and we were unable to recover it. 
00:25:16.576 [2024-07-15 19:19:56.781496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.576 [2024-07-15 19:19:56.781674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.576 [2024-07-15 19:19:56.781701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.576 [2024-07-15 19:19:56.781732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.576 [2024-07-15 19:19:56.781745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.576 [2024-07-15 19:19:56.781795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.576 qpair failed and we were unable to recover it. 00:25:16.576 [2024-07-15 19:19:56.791495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.576 [2024-07-15 19:19:56.791640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.576 [2024-07-15 19:19:56.791667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.576 [2024-07-15 19:19:56.791682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.576 [2024-07-15 19:19:56.791696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.576 [2024-07-15 19:19:56.791740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.576 qpair failed and we were unable to recover it. 00:25:16.576 [2024-07-15 19:19:56.801556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.576 [2024-07-15 19:19:56.801699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.576 [2024-07-15 19:19:56.801726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.576 [2024-07-15 19:19:56.801742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.576 [2024-07-15 19:19:56.801756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.576 [2024-07-15 19:19:56.801785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.576 qpair failed and we were unable to recover it. 
00:25:16.576 [2024-07-15 19:19:56.811561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.576 [2024-07-15 19:19:56.811756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.811782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.811797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.811810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.811840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 00:25:16.577 [2024-07-15 19:19:56.821574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.821744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.821771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.821786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.821800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.821828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 00:25:16.577 [2024-07-15 19:19:56.831633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.831777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.831808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.831824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.831838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.831869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 
00:25:16.577 [2024-07-15 19:19:56.841619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.841788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.841814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.841830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.841859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.841894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 00:25:16.577 [2024-07-15 19:19:56.851653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.851801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.851828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.851843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.851856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.851895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 00:25:16.577 [2024-07-15 19:19:56.861677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.861818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.861844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.861859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.861872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.861910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 
00:25:16.577 [2024-07-15 19:19:56.871696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.871868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.871901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.871918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.871937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.871967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 00:25:16.577 [2024-07-15 19:19:56.881729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.881869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.881905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.881920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.881933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.881963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 00:25:16.577 [2024-07-15 19:19:56.891763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.891912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.891939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.891954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.891968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.577 [2024-07-15 19:19:56.891998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.577 qpair failed and we were unable to recover it. 
00:25:16.577 [2024-07-15 19:19:56.901802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.577 [2024-07-15 19:19:56.901956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.577 [2024-07-15 19:19:56.901983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.577 [2024-07-15 19:19:56.901999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.577 [2024-07-15 19:19:56.902012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.902042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.578 [2024-07-15 19:19:56.911804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.911961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.911987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.912003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.912016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.912046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.578 [2024-07-15 19:19:56.921841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.922019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.922047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.922063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.922076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.922106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 
00:25:16.578 [2024-07-15 19:19:56.931897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.932056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.932084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.932114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.932127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.932171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.578 [2024-07-15 19:19:56.941917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.942059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.942086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.942105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.942119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.942149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.578 [2024-07-15 19:19:56.951948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.952116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.952143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.952159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.952173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.952202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 
00:25:16.578 [2024-07-15 19:19:56.961971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.962125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.962152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.962173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.962187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.962231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.578 [2024-07-15 19:19:56.972032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.972178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.972204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.972219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.972233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.972262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.578 [2024-07-15 19:19:56.982056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.982250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.982276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.982306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.982319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.982348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 
00:25:16.578 [2024-07-15 19:19:56.992040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:56.992188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:56.992215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:56.992230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:56.992243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:56.992273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.578 [2024-07-15 19:19:57.002089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.578 [2024-07-15 19:19:57.002243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.578 [2024-07-15 19:19:57.002271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.578 [2024-07-15 19:19:57.002286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.578 [2024-07-15 19:19:57.002299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.578 [2024-07-15 19:19:57.002345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.578 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.012152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.012366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.012391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.012406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.012418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.012447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 
00:25:16.841 [2024-07-15 19:19:57.022175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.022330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.022357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.022372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.022385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.022415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.032236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.032400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.032427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.032443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.032456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.032486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.042173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.042323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.042349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.042365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.042378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.042408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 
00:25:16.841 [2024-07-15 19:19:57.052213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.052360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.052386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.052406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.052421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.052451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.062262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.062412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.062438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.062453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.062466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.062497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.072269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.072412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.072437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.072452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.072467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.072497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 
00:25:16.841 [2024-07-15 19:19:57.082337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.082503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.082530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.082545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.082575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.082605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.092368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.092513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.092540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.092555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.092569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.092612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.102345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.102485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.102512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.102528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.102541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.102570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 
00:25:16.841 [2024-07-15 19:19:57.112372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.112509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.112536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.112552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.112566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.112595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.122473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.122616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.122642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.122658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.122671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.122717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 00:25:16.841 [2024-07-15 19:19:57.132476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.132663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.132689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.841 [2024-07-15 19:19:57.132705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.841 [2024-07-15 19:19:57.132718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.841 [2024-07-15 19:19:57.132749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.841 qpair failed and we were unable to recover it. 
00:25:16.841 [2024-07-15 19:19:57.142487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.841 [2024-07-15 19:19:57.142652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.841 [2024-07-15 19:19:57.142684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.142701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.142729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.142759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.152519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.152663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.152689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.152705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.152722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.152769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.162629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.162774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.162800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.162816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.162830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.162860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 
00:25:16.842 [2024-07-15 19:19:57.172559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.172724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.172751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.172766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.172779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.172808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.182613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.182773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.182800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.182816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.182829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.182865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.192625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.192761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.192788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.192803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.192817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.192847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 
00:25:16.842 [2024-07-15 19:19:57.202635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.202770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.202796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.202812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.202825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.202855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.212715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.212896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.212933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.212952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.212968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.212999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.222733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.222897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.222924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.222939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.222952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.222983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 
00:25:16.842 [2024-07-15 19:19:57.232729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.232939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.232971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.232988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.233002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.233032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.242789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.242966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.242994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.243010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.243022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.243052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:16.842 [2024-07-15 19:19:57.252781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.252938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.252964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.252980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.252994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.253023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 
00:25:16.842 [2024-07-15 19:19:57.262814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:16.842 [2024-07-15 19:19:57.262957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:16.842 [2024-07-15 19:19:57.262983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:16.842 [2024-07-15 19:19:57.262998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:16.842 [2024-07-15 19:19:57.263012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:16.842 [2024-07-15 19:19:57.263042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:16.842 qpair failed and we were unable to recover it. 00:25:17.102 [2024-07-15 19:19:57.272945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.273091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.273120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.273140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.273163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.273196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 00:25:17.102 [2024-07-15 19:19:57.282871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.283021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.283048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.283063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.283076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.283106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 
00:25:17.102 [2024-07-15 19:19:57.292917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.293091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.293119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.293135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.293152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.293183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 00:25:17.102 [2024-07-15 19:19:57.302986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.303129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.303156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.303172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.303185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.303215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 00:25:17.102 [2024-07-15 19:19:57.312973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.313110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.313137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.313153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.313166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.313195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 
00:25:17.102 [2024-07-15 19:19:57.323013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.323155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.323181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.323196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.323209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.323255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 00:25:17.102 [2024-07-15 19:19:57.333041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.333233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.333260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.333276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.333288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.333318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 00:25:17.102 [2024-07-15 19:19:57.343042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.343238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.102 [2024-07-15 19:19:57.343264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.102 [2024-07-15 19:19:57.343280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.102 [2024-07-15 19:19:57.343293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.102 [2024-07-15 19:19:57.343323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.102 qpair failed and we were unable to recover it. 
00:25:17.102 [2024-07-15 19:19:57.353169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.102 [2024-07-15 19:19:57.353316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.353341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.353357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.353370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.353415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.103 [2024-07-15 19:19:57.363109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.363255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.363282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.363297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.363316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.363346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.103 [2024-07-15 19:19:57.373155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.373302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.373328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.373344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.373357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.373386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 
00:25:17.103 [2024-07-15 19:19:57.383147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.383297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.383323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.383339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.383352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.383382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.103 [2024-07-15 19:19:57.393177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.393319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.393345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.393360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.393374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.393405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.103 [2024-07-15 19:19:57.403253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.403437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.403465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.403480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.403493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.403537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 
00:25:17.103 [2024-07-15 19:19:57.413278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.413427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.413454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.413468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.413482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.413511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.103 [2024-07-15 19:19:57.423296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.423459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.423484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.423502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.423515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.423545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.103 [2024-07-15 19:19:57.433298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.433439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.433466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.433482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.433495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.433525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 
00:25:17.103 [2024-07-15 19:19:57.443405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.443562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.443589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.443605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.443618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.443648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.103 [2024-07-15 19:19:57.453451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.103 [2024-07-15 19:19:57.453625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.103 [2024-07-15 19:19:57.453651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.103 [2024-07-15 19:19:57.453673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.103 [2024-07-15 19:19:57.453702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.103 [2024-07-15 19:19:57.453745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.103 qpair failed and we were unable to recover it. 00:25:17.104 [2024-07-15 19:19:57.463422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.104 [2024-07-15 19:19:57.463585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.104 [2024-07-15 19:19:57.463611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.104 [2024-07-15 19:19:57.463626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.104 [2024-07-15 19:19:57.463639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.104 [2024-07-15 19:19:57.463683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.104 qpair failed and we were unable to recover it. 
00:25:17.104 [2024-07-15 19:19:57.473500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.104 [2024-07-15 19:19:57.473642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.104 [2024-07-15 19:19:57.473669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.104 [2024-07-15 19:19:57.473684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.104 [2024-07-15 19:19:57.473698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.104 [2024-07-15 19:19:57.473728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.104 qpair failed and we were unable to recover it. 00:25:17.104 [2024-07-15 19:19:57.483451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.104 [2024-07-15 19:19:57.483644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.104 [2024-07-15 19:19:57.483670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.104 [2024-07-15 19:19:57.483685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.104 [2024-07-15 19:19:57.483698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.104 [2024-07-15 19:19:57.483728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.104 qpair failed and we were unable to recover it. 00:25:17.104 [2024-07-15 19:19:57.493556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.104 [2024-07-15 19:19:57.493702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.104 [2024-07-15 19:19:57.493728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.104 [2024-07-15 19:19:57.493744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.104 [2024-07-15 19:19:57.493757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.104 [2024-07-15 19:19:57.493787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.104 qpair failed and we were unable to recover it. 
00:25:17.104 [2024-07-15 19:19:57.503501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.104 [2024-07-15 19:19:57.503643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.104 [2024-07-15 19:19:57.503670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.104 [2024-07-15 19:19:57.503685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.104 [2024-07-15 19:19:57.503698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.104 [2024-07-15 19:19:57.503728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.104 qpair failed and we were unable to recover it. 00:25:17.104 [2024-07-15 19:19:57.513552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.104 [2024-07-15 19:19:57.513694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.104 [2024-07-15 19:19:57.513720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.104 [2024-07-15 19:19:57.513736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.104 [2024-07-15 19:19:57.513750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.104 [2024-07-15 19:19:57.513779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.104 qpair failed and we were unable to recover it. 00:25:17.104 [2024-07-15 19:19:57.523554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.104 [2024-07-15 19:19:57.523741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.104 [2024-07-15 19:19:57.523768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.104 [2024-07-15 19:19:57.523783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.104 [2024-07-15 19:19:57.523796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.104 [2024-07-15 19:19:57.523826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.104 qpair failed and we were unable to recover it. 
00:25:17.387 [2024-07-15 19:19:57.533616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.387 [2024-07-15 19:19:57.533766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.387 [2024-07-15 19:19:57.533793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.387 [2024-07-15 19:19:57.533809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.387 [2024-07-15 19:19:57.533823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.387 [2024-07-15 19:19:57.533853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.387 qpair failed and we were unable to recover it. 00:25:17.387 [2024-07-15 19:19:57.543617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.387 [2024-07-15 19:19:57.543764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.387 [2024-07-15 19:19:57.543799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.387 [2024-07-15 19:19:57.543817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.387 [2024-07-15 19:19:57.543831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.387 [2024-07-15 19:19:57.543862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.387 qpair failed and we were unable to recover it. 00:25:17.387 [2024-07-15 19:19:57.553640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.387 [2024-07-15 19:19:57.553777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.387 [2024-07-15 19:19:57.553804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.387 [2024-07-15 19:19:57.553820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.387 [2024-07-15 19:19:57.553833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.387 [2024-07-15 19:19:57.553862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.387 qpair failed and we were unable to recover it. 
00:25:17.387 [2024-07-15 19:19:57.563730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.387 [2024-07-15 19:19:57.563912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.387 [2024-07-15 19:19:57.563939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.387 [2024-07-15 19:19:57.563954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.387 [2024-07-15 19:19:57.563967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.387 [2024-07-15 19:19:57.563997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.387 qpair failed and we were unable to recover it. 00:25:17.387 [2024-07-15 19:19:57.573759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.387 [2024-07-15 19:19:57.573919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.387 [2024-07-15 19:19:57.573943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.387 [2024-07-15 19:19:57.573958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.387 [2024-07-15 19:19:57.573971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.387 [2024-07-15 19:19:57.574001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.387 qpair failed and we were unable to recover it. 00:25:17.387 [2024-07-15 19:19:57.583764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.387 [2024-07-15 19:19:57.583917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.387 [2024-07-15 19:19:57.583943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.387 [2024-07-15 19:19:57.583958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.583972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.584008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 
00:25:17.388 [2024-07-15 19:19:57.593786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.593943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.593970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.593985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.593998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.594028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.603807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.603977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.604006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.604021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.604038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.604071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.613849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.614015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.614042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.614057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.614071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.614115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 
00:25:17.388 [2024-07-15 19:19:57.623851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.624014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.624041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.624057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.624073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.624105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.633881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.634076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.634108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.634124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.634137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.634168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.643903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.644044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.644071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.644086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.644098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.644128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 
00:25:17.388 [2024-07-15 19:19:57.653947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.654095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.654122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.654137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.654149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.654179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.664105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.664248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.664275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.664290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.664303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.664333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.673991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.674128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.674153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.674169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.674189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.674220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 
00:25:17.388 [2024-07-15 19:19:57.684014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.684169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.684196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.684212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.684226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.684256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.694064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.694227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.694253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.694268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.694281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.694312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.704101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.704244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.704270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.704285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.704298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.704329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 
00:25:17.388 [2024-07-15 19:19:57.714194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.714348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.714374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.714388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.714401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.388 [2024-07-15 19:19:57.714431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.388 qpair failed and we were unable to recover it. 00:25:17.388 [2024-07-15 19:19:57.724172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.388 [2024-07-15 19:19:57.724357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.388 [2024-07-15 19:19:57.724383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.388 [2024-07-15 19:19:57.724398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.388 [2024-07-15 19:19:57.724410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.724441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 00:25:17.389 [2024-07-15 19:19:57.734185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.734339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.734366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.734381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.734394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.734425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 
00:25:17.389 [2024-07-15 19:19:57.744186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.744360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.744386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.744400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.744413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.744444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 00:25:17.389 [2024-07-15 19:19:57.754240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.754393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.754419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.754435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.754449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.754479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 00:25:17.389 [2024-07-15 19:19:57.764241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.764433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.764459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.764474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.764494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.764525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 
00:25:17.389 [2024-07-15 19:19:57.774273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.774418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.774444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.774459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.774472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.774502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 00:25:17.389 [2024-07-15 19:19:57.784315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.784496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.784524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.784539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.784552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.784584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 00:25:17.389 [2024-07-15 19:19:57.794324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.794475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.794502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.794517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.794530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.794561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 
00:25:17.389 [2024-07-15 19:19:57.804405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.804559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.804586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.804600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.804613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.804659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 00:25:17.389 [2024-07-15 19:19:57.814413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.389 [2024-07-15 19:19:57.814565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.389 [2024-07-15 19:19:57.814592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.389 [2024-07-15 19:19:57.814607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.389 [2024-07-15 19:19:57.814621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.389 [2024-07-15 19:19:57.814653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.389 qpair failed and we were unable to recover it. 00:25:17.648 [2024-07-15 19:19:57.824441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.824614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.824640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.824654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.824667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.824698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 
00:25:17.648 [2024-07-15 19:19:57.834467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.834646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.834673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.834688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.834701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.834731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 00:25:17.648 [2024-07-15 19:19:57.844504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.844662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.844688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.844703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.844716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.844746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 00:25:17.648 [2024-07-15 19:19:57.854563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.854741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.854767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.854788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.854803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.854833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 
00:25:17.648 [2024-07-15 19:19:57.864547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.864685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.864712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.864727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.864740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.864770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 00:25:17.648 [2024-07-15 19:19:57.874575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.874752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.874778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.874794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.874806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.874837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 00:25:17.648 [2024-07-15 19:19:57.884661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.884842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.884867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.884893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.884908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.884938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 
00:25:17.648 [2024-07-15 19:19:57.894642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.894792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.894818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.894833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.894846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.894885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 00:25:17.648 [2024-07-15 19:19:57.904652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.904805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.904831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.648 [2024-07-15 19:19:57.904846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.648 [2024-07-15 19:19:57.904859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.648 [2024-07-15 19:19:57.904897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.648 qpair failed and we were unable to recover it. 00:25:17.648 [2024-07-15 19:19:57.914696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.648 [2024-07-15 19:19:57.914855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.648 [2024-07-15 19:19:57.914888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.914905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.914920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.914950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 
00:25:17.649 [2024-07-15 19:19:57.924707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.924851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.924883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.924900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.924914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.924944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:57.934740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.934917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.934943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.934958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.934971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.935001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:57.944762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.944918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.944952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.944968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.944981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.945012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 
00:25:17.649 [2024-07-15 19:19:57.954822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.954979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.955005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.955020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.955033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.955064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:57.964804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.964959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.964985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.964999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.965012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.965043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:57.974935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.975082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.975107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.975122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.975135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.975165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 
00:25:17.649 [2024-07-15 19:19:57.984871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.985028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.985054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.985069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.985082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.985118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:57.994893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:57.995049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:57.995075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:57.995090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:57.995103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:57.995133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:58.004941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:58.005107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:58.005133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:58.005148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:58.005161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:58.005192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 
00:25:17.649 [2024-07-15 19:19:58.015009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:58.015161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:58.015186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:58.015201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:58.015214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:58.015259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:58.024996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:58.025152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:58.025181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:58.025197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:58.025211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:58.025241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:58.035041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:58.035187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:58.035223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:58.035248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:58.035262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:58.035307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 
00:25:17.649 [2024-07-15 19:19:58.045025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:58.045168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:58.045194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:58.045209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:58.045224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:58.045254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.649 qpair failed and we were unable to recover it. 00:25:17.649 [2024-07-15 19:19:58.055078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.649 [2024-07-15 19:19:58.055269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.649 [2024-07-15 19:19:58.055295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.649 [2024-07-15 19:19:58.055310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.649 [2024-07-15 19:19:58.055323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.649 [2024-07-15 19:19:58.055354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.650 qpair failed and we were unable to recover it. 00:25:17.650 [2024-07-15 19:19:58.065133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.650 [2024-07-15 19:19:58.065316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.650 [2024-07-15 19:19:58.065342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.650 [2024-07-15 19:19:58.065358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.650 [2024-07-15 19:19:58.065370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.650 [2024-07-15 19:19:58.065401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.650 qpair failed and we were unable to recover it. 
00:25:17.650 [2024-07-15 19:19:58.075112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.650 [2024-07-15 19:19:58.075253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.650 [2024-07-15 19:19:58.075279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.650 [2024-07-15 19:19:58.075294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.650 [2024-07-15 19:19:58.075307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.650 [2024-07-15 19:19:58.075344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.650 qpair failed and we were unable to recover it. 00:25:17.908 [2024-07-15 19:19:58.085152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.908 [2024-07-15 19:19:58.085287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.908 [2024-07-15 19:19:58.085314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.908 [2024-07-15 19:19:58.085329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.908 [2024-07-15 19:19:58.085342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.908 [2024-07-15 19:19:58.085372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.908 qpair failed and we were unable to recover it. 00:25:17.908 [2024-07-15 19:19:58.095302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.908 [2024-07-15 19:19:58.095499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.908 [2024-07-15 19:19:58.095524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.908 [2024-07-15 19:19:58.095538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.908 [2024-07-15 19:19:58.095552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.908 [2024-07-15 19:19:58.095596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.908 qpair failed and we were unable to recover it. 
00:25:17.908 [2024-07-15 19:19:58.105250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.908 [2024-07-15 19:19:58.105447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.908 [2024-07-15 19:19:58.105474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.908 [2024-07-15 19:19:58.105489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.908 [2024-07-15 19:19:58.105502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.908 [2024-07-15 19:19:58.105533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.908 qpair failed and we were unable to recover it. 00:25:17.908 [2024-07-15 19:19:58.115230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.908 [2024-07-15 19:19:58.115365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.908 [2024-07-15 19:19:58.115391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.908 [2024-07-15 19:19:58.115406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.908 [2024-07-15 19:19:58.115419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.908 [2024-07-15 19:19:58.115449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.908 qpair failed and we were unable to recover it. 00:25:17.908 [2024-07-15 19:19:58.125264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.908 [2024-07-15 19:19:58.125409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.908 [2024-07-15 19:19:58.125435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.908 [2024-07-15 19:19:58.125450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.908 [2024-07-15 19:19:58.125463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.908 [2024-07-15 19:19:58.125493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.908 qpair failed and we were unable to recover it. 
00:25:17.909 [2024-07-15 19:19:58.135341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.135503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.135529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.135545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.135557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.135587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 00:25:17.909 [2024-07-15 19:19:58.145323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.145467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.145493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.145508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.145521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.145552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 00:25:17.909 [2024-07-15 19:19:58.155399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.155545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.155571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.155586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.155601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.155631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 
00:25:17.909 [2024-07-15 19:19:58.165507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.165664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.165690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.165704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.165725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.165756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 00:25:17.909 [2024-07-15 19:19:58.175480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.175630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.175658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.175677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.175691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.175748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 00:25:17.909 [2024-07-15 19:19:58.185494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.185676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.185702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.185717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.185730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.185761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 
00:25:17.909 [2024-07-15 19:19:58.195464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.195601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.195628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.195643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.195655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.195686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 00:25:17.909 [2024-07-15 19:19:58.205513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:17.909 [2024-07-15 19:19:58.205652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:17.909 [2024-07-15 19:19:58.205679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:17.909 [2024-07-15 19:19:58.205694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:17.909 [2024-07-15 19:19:58.205707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f96ac000b90 00:25:17.909 [2024-07-15 19:19:58.205737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:17.909 qpair failed and we were unable to recover it. 00:25:17.909 [2024-07-15 19:19:58.205805] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:17.909 A controller has encountered a failure and is being reset. 00:25:18.167 Controller properly reset. 00:25:18.427 Initializing NVMe Controllers 00:25:18.427 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:18.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:18.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:18.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:18.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:18.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:18.427 Initialization complete. Launching workers. 
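The repeated CONNECT failures above (sct 1, sc 130, "Unknown controller ID") are the expected symptom of the disconnect test pulling the target away from under live host qpairs; once the keep-alive submission also fails, the host resets the controller and re-attaches, which is the reset-and-reattach sequence logged just above. As a rough, hypothetical shell sketch only (not the mechanism this particular test script uses), a similar target-side disconnect can be provoked by toggling a subsystem listener with the standard rpc.py helper while a host workload keeps retrying; the rpc.py path and addresses below are assumptions taken from this run:

  # Sketch: force a target-side disconnect, then let the host recover.
  RPC=./scripts/rpc.py                        # assumed location of the SPDK RPC helper
  NQN=nqn.2016-06.io.spdk:cnode1              # subsystem NQN used in this run
  # Drop the listener; in-flight host connections start failing with transport errors.
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 2
  # Restore the listener; the host's next controller reset/reconnect should succeed.
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420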
00:25:18.427 Starting thread on core 1 00:25:18.427 Starting thread on core 2 00:25:18.427 Starting thread on core 3 00:25:18.427 Starting thread on core 0 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:18.427 00:25:18.427 real 0m10.834s 00:25:18.427 user 0m18.496s 00:25:18.427 sys 0m5.613s 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:18.427 ************************************ 00:25:18.427 END TEST nvmf_target_disconnect_tc2 00:25:18.427 ************************************ 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.427 rmmod nvme_tcp 00:25:18.427 rmmod nvme_fabrics 00:25:18.427 rmmod nvme_keyring 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3412752 ']' 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3412752 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3412752 ']' 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3412752 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3412752 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3412752' 00:25:18.427 killing process with pid 3412752 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3412752 00:25:18.427 19:19:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3412752 00:25:18.994 
19:19:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.994 19:19:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.994 19:19:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.994 19:19:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.994 19:19:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.994 19:19:59 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.994 19:19:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.994 19:19:59 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.903 19:20:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.903 00:25:20.903 real 0m15.542s 00:25:20.903 user 0m44.681s 00:25:20.903 sys 0m7.541s 00:25:20.903 19:20:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:20.903 19:20:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:20.903 ************************************ 00:25:20.903 END TEST nvmf_target_disconnect 00:25:20.903 ************************************ 00:25:20.903 19:20:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:20.903 19:20:01 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:20.903 19:20:01 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.903 19:20:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.903 19:20:01 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:20.903 00:25:20.903 real 19m38.710s 00:25:20.903 user 46m28.785s 00:25:20.903 sys 4m56.131s 00:25:20.903 19:20:01 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:20.903 19:20:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.903 ************************************ 00:25:20.903 END TEST nvmf_tcp 00:25:20.903 ************************************ 00:25:20.903 19:20:01 -- common/autotest_common.sh@1142 -- # return 0 00:25:20.903 19:20:01 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:20.903 19:20:01 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:20.903 19:20:01 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:20.903 19:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.903 19:20:01 -- common/autotest_common.sh@10 -- # set +x 00:25:20.903 ************************************ 00:25:20.903 START TEST spdkcli_nvmf_tcp 00:25:20.903 ************************************ 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:20.903 * Looking for test storage... 
00:25:20.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.903 19:20:01 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3413951 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3413951 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3413951 ']' 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.160 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.160 [2024-07-15 19:20:01.385379] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:21.160 [2024-07-15 19:20:01.385457] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413951 ] 00:25:21.160 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.160 [2024-07-15 19:20:01.442330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:21.160 [2024-07-15 19:20:01.549096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.160 [2024-07-15 19:20:01.549101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.417 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.417 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.418 19:20:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:21.418 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:21.418 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:21.418 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:21.418 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:21.418 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:21.418 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:21.418 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:21.418 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:21.418 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:21.418 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:21.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:21.418 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:21.418 ' 00:25:23.945 [2024-07-15 19:20:04.205376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.359 [2024-07-15 19:20:05.429637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:27.264 [2024-07-15 19:20:07.688762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:29.797 [2024-07-15 19:20:09.626969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:30.735 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:30.735 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:30.735 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:30.735 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:30.735 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:30.735 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:30.735 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:30.735 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:30.735 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:30.735 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:30.735 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:30.735 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:30.735 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:30.993 19:20:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:31.250 19:20:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:31.508 19:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:31.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:31.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:31.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:31.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:31.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:31.508 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:31.508 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:31.508 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:31.508 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:31.508 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:31.508 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:31.508 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:31.508 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:31.508 ' 00:25:36.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:36.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:36.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:36.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:36.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:36.780 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:36.780 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:36.780 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:36.780 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:36.780 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:36.780 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:36.780 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:36.780 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:36.780 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3413951 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3413951 ']' 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3413951 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3413951 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3413951' 00:25:36.780 killing process with pid 3413951 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3413951 00:25:36.780 19:20:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3413951 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3413951 ']' 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3413951 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3413951 ']' 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3413951 00:25:37.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3413951) - No such process 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3413951 is not found' 00:25:37.038 Process with pid 3413951 is not found 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:37.038 00:25:37.038 real 0m15.967s 00:25:37.038 user 0m33.653s 00:25:37.038 sys 0m0.758s 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.038 19:20:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.038 ************************************ 00:25:37.038 END TEST spdkcli_nvmf_tcp 00:25:37.038 ************************************ 00:25:37.038 19:20:17 -- common/autotest_common.sh@1142 -- # return 0 00:25:37.038 19:20:17 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:37.038 19:20:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:37.038 19:20:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.038 19:20:17 -- common/autotest_common.sh@10 -- # set +x 00:25:37.038 ************************************ 00:25:37.038 START TEST nvmf_identify_passthru 00:25:37.038 ************************************ 00:25:37.038 19:20:17 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:37.038 * Looking for test storage... 00:25:37.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:37.038 19:20:17 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.038 19:20:17 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.038 19:20:17 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.038 19:20:17 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.038 19:20:17 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.038 19:20:17 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.038 19:20:17 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.038 19:20:17 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:37.038 19:20:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.038 19:20:17 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.038 19:20:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:37.038 19:20:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.038 19:20:17 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.038 19:20:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.939 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.939 19:20:19 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:38.939 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:38.940 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:38.940 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:38.940 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:38.940 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
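The discovery trace above matches the two E810 functions (vendor 0x8086, device 0x159b) and then resolves each PCI function to its kernel network interface by globbing /sys/bus/pci/devices/<bdf>/net/, which is what produces the "Found net devices under 0000:0a:00.x: cvl_0_y" lines. A minimal stand-alone sketch of that lookup, using the PCI addresses reported in this run:

  # Sketch: map a PCI function to the network interface(s) it exposes, via sysfs.
  for bdf in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
          [ -e "$netdir" ] || continue          # skip functions with no bound netdev
          echo "Found net device under $bdf: $(basename "$netdir")"
      done
  done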
00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:38.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:25:38.940 00:25:38.940 --- 10.0.0.2 ping statistics --- 00:25:38.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.940 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:38.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:25:38.940 00:25:38.940 --- 10.0.0.1 ping statistics --- 00:25:38.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.940 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:38.940 19:20:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:38.940 19:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:38.940 19:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:38.940 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:39.200 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:39.200 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:25:39.200 19:20:19 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:25:39.200 19:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:39.200 19:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:39.200 19:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:39.200 19:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:39.200 19:20:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:39.200 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.392 
19:20:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:43.392 19:20:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:43.392 19:20:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:43.392 19:20:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:43.392 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.583 19:20:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:47.583 19:20:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:47.583 19:20:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:47.583 19:20:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3418467 00:25:47.583 19:20:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:47.583 19:20:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:47.583 19:20:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3418467 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3418467 ']' 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.583 19:20:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:47.583 [2024-07-15 19:20:27.906529] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:47.583 [2024-07-15 19:20:27.906630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.583 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.584 [2024-07-15 19:20:27.973967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.873 [2024-07-15 19:20:28.094437] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.873 [2024-07-15 19:20:28.094502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:47.873 [2024-07-15 19:20:28.094519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.873 [2024-07-15 19:20:28.094533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.873 [2024-07-15 19:20:28.094545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.873 [2024-07-15 19:20:28.094605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.873 [2024-07-15 19:20:28.094674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.873 [2024-07-15 19:20:28.094763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.873 [2024-07-15 19:20:28.094766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.809 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.809 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:48.809 19:20:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:48.809 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.809 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:48.809 INFO: Log level set to 20 00:25:48.809 INFO: Requests: 00:25:48.809 { 00:25:48.809 "jsonrpc": "2.0", 00:25:48.809 "method": "nvmf_set_config", 00:25:48.809 "id": 1, 00:25:48.809 "params": { 00:25:48.809 "admin_cmd_passthru": { 00:25:48.809 "identify_ctrlr": true 00:25:48.809 } 00:25:48.809 } 00:25:48.809 } 00:25:48.809 00:25:48.809 INFO: response: 00:25:48.809 { 00:25:48.809 "jsonrpc": "2.0", 00:25:48.809 "id": 1, 00:25:48.809 "result": true 00:25:48.809 } 00:25:48.809 00:25:48.809 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.809 19:20:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:48.809 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.809 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:48.809 INFO: Setting log level to 20 00:25:48.810 INFO: Setting log level to 20 00:25:48.810 INFO: Log level set to 20 00:25:48.810 INFO: Log level set to 20 00:25:48.810 INFO: Requests: 00:25:48.810 { 00:25:48.810 "jsonrpc": "2.0", 00:25:48.810 "method": "framework_start_init", 00:25:48.810 "id": 1 00:25:48.810 } 00:25:48.810 00:25:48.810 INFO: Requests: 00:25:48.810 { 00:25:48.810 "jsonrpc": "2.0", 00:25:48.810 "method": "framework_start_init", 00:25:48.810 "id": 1 00:25:48.810 } 00:25:48.810 00:25:48.810 [2024-07-15 19:20:28.975079] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:48.810 INFO: response: 00:25:48.810 { 00:25:48.810 "jsonrpc": "2.0", 00:25:48.810 "id": 1, 00:25:48.810 "result": true 00:25:48.810 } 00:25:48.810 00:25:48.810 INFO: response: 00:25:48.810 { 00:25:48.810 "jsonrpc": "2.0", 00:25:48.810 "id": 1, 00:25:48.810 "result": true 00:25:48.810 } 00:25:48.810 00:25:48.810 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.810 19:20:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:48.810 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.810 19:20:28 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.810 INFO: Setting log level to 40 00:25:48.810 INFO: Setting log level to 40 00:25:48.810 INFO: Setting log level to 40 00:25:48.810 [2024-07-15 19:20:28.985039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.810 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.810 19:20:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:48.810 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.810 19:20:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:48.810 19:20:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:48.810 19:20:29 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.810 19:20:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:52.099 Nvme0n1 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.099 19:20:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.099 19:20:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.099 19:20:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:52.099 [2024-07-15 19:20:31.877481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.099 19:20:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:52.099 [ 00:25:52.099 { 00:25:52.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:52.099 "subtype": "Discovery", 00:25:52.099 "listen_addresses": [], 00:25:52.099 "allow_any_host": true, 00:25:52.099 "hosts": [] 00:25:52.099 }, 00:25:52.099 { 00:25:52.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.099 "subtype": "NVMe", 00:25:52.099 "listen_addresses": [ 00:25:52.099 { 00:25:52.099 "trtype": "TCP", 00:25:52.099 "adrfam": "IPv4", 00:25:52.099 "traddr": "10.0.0.2", 00:25:52.099 "trsvcid": "4420" 00:25:52.099 } 00:25:52.099 ], 00:25:52.099 "allow_any_host": true, 00:25:52.099 "hosts": [], 00:25:52.099 "serial_number": 
"SPDK00000000000001", 00:25:52.099 "model_number": "SPDK bdev Controller", 00:25:52.099 "max_namespaces": 1, 00:25:52.099 "min_cntlid": 1, 00:25:52.099 "max_cntlid": 65519, 00:25:52.099 "namespaces": [ 00:25:52.099 { 00:25:52.099 "nsid": 1, 00:25:52.099 "bdev_name": "Nvme0n1", 00:25:52.099 "name": "Nvme0n1", 00:25:52.099 "nguid": "93FF589FD59B4ADA9B45A613C1A870BC", 00:25:52.099 "uuid": "93ff589f-d59b-4ada-9b45-a613c1a870bc" 00:25:52.099 } 00:25:52.099 ] 00:25:52.099 } 00:25:52.099 ] 00:25:52.099 19:20:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.099 19:20:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:52.099 19:20:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:52.099 19:20:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:52.099 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:52.099 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:52.099 19:20:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:52.099 rmmod nvme_tcp 00:25:52.099 rmmod nvme_fabrics 00:25:52.099 rmmod nvme_keyring 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:52.099 19:20:32 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3418467 ']' 00:25:52.099 19:20:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3418467 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3418467 ']' 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3418467 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418467 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418467' 00:25:52.099 killing process with pid 3418467 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3418467 00:25:52.099 19:20:32 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3418467 00:25:54.000 19:20:34 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:54.000 19:20:34 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:54.000 19:20:34 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:54.000 19:20:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.000 19:20:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.000 19:20:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.000 19:20:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:54.000 19:20:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.905 19:20:36 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:55.905 00:25:55.905 real 0m18.777s 00:25:55.905 user 0m30.138s 00:25:55.905 sys 0m2.261s 00:25:55.905 19:20:36 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:55.905 19:20:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:55.905 ************************************ 00:25:55.905 END TEST nvmf_identify_passthru 00:25:55.905 ************************************ 00:25:55.905 19:20:36 -- common/autotest_common.sh@1142 -- # return 0 00:25:55.905 19:20:36 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:55.905 19:20:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:55.905 19:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:55.905 19:20:36 -- common/autotest_common.sh@10 -- # set +x 00:25:55.905 ************************************ 00:25:55.905 START TEST nvmf_dif 00:25:55.905 ************************************ 00:25:55.905 19:20:36 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:55.905 * Looking for test storage... 
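The identify_passthru run above closes with nvmftestfini; condensed from the trace, the teardown it performed was roughly the following (the pid and interface names are from this run, and the _remove_spdk_ns helper body is not shown in the log, so the netns deletion is an assumption):

  # sketch of the teardown traced above
  nvmfpid=3418467
  modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                      # killprocess 3418467
  ip netns delete cvl_0_0_ns_spdk      # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1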
00:25:55.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:55.905 19:20:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.905 19:20:36 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.905 19:20:36 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.905 19:20:36 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.905 19:20:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.905 19:20:36 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.905 19:20:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.905 19:20:36 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:55.905 19:20:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:55.905 19:20:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:55.905 19:20:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:55.905 19:20:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:55.905 19:20:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:55.905 19:20:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.905 19:20:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:55.905 19:20:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:55.905 19:20:36 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:55.905 19:20:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.807 19:20:38 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:57.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:57.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:57.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:57.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.808 19:20:38 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:57.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:57.808 00:25:57.808 --- 10.0.0.2 ping statistics --- 00:25:57.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.808 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:25:57.808 00:25:57.808 --- 10.0.0.1 ping statistics --- 00:25:57.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.808 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:57.808 19:20:38 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:59.184 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:59.184 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:59.184 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:59.184 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:59.184 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:59.184 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:59.184 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:59.184 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:59.184 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:59.184 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:59.184 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:59.184 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:59.184 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:59.184 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:59.184 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:59.184 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:59.184 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.184 19:20:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:59.184 19:20:39 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3421742 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:59.184 19:20:39 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3421742 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3421742 ']' 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.184 19:20:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:59.184 [2024-07-15 19:20:39.543083] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:25:59.184 [2024-07-15 19:20:39.543173] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.184 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.184 [2024-07-15 19:20:39.609317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.444 [2024-07-15 19:20:39.729841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.444 [2024-07-15 19:20:39.729929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.444 [2024-07-15 19:20:39.729976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.444 [2024-07-15 19:20:39.729989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.444 [2024-07-15 19:20:39.729999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
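At this point the dif test has rebuilt the same namespaced topology that identify_passthru used and has launched its own nvmf_tgt (pid 3421742) inside it. Pulled out of the trace, the bring-up is roughly the following (interface names, addresses and target arguments are the ones logged above; the binary path is shortened and the waitforlisten polling is reduced to a comment):

  # sketch of the namespaced target bring-up traced above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!        # waitforlisten then polls /var/tmp/spdk.sock for this pid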
00:25:59.444 [2024-07-15 19:20:39.730027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:00.382 19:20:40 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.382 19:20:40 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.382 19:20:40 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:00.382 19:20:40 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.382 [2024-07-15 19:20:40.553968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.382 19:20:40 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.382 19:20:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.382 ************************************ 00:26:00.382 START TEST fio_dif_1_default 00:26:00.382 ************************************ 00:26:00.382 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:00.382 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:00.382 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:00.382 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.382 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.382 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.383 bdev_null0 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.383 [2024-07-15 19:20:40.610217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.383 { 00:26:00.383 "params": { 00:26:00.383 "name": "Nvme$subsystem", 00:26:00.383 "trtype": "$TEST_TRANSPORT", 00:26:00.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.383 "adrfam": "ipv4", 00:26:00.383 "trsvcid": "$NVMF_PORT", 00:26:00.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.383 "hdgst": ${hdgst:-false}, 00:26:00.383 "ddgst": ${ddgst:-false} 00:26:00.383 }, 00:26:00.383 "method": "bdev_nvme_attach_controller" 00:26:00.383 } 00:26:00.383 EOF 00:26:00.383 )") 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:00.383 "params": { 00:26:00.383 "name": "Nvme0", 00:26:00.383 "trtype": "tcp", 00:26:00.383 "traddr": "10.0.0.2", 00:26:00.383 "adrfam": "ipv4", 00:26:00.383 "trsvcid": "4420", 00:26:00.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.383 "hdgst": false, 00:26:00.383 "ddgst": false 00:26:00.383 }, 00:26:00.383 "method": "bdev_nvme_attach_controller" 00:26:00.383 }' 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:00.383 19:20:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.642 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.642 fio-3.35 00:26:00.642 Starting 1 thread 00:26:00.642 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.841 00:26:12.841 filename0: (groupid=0, jobs=1): err= 0: pid=3422101: Mon Jul 15 19:20:51 2024 00:26:12.841 read: IOPS=96, BW=386KiB/s (395kB/s)(3856KiB/10001msec) 00:26:12.841 slat (nsec): min=4385, max=35052, avg=10747.69, stdev=4679.39 00:26:12.841 clat (usec): min=40892, max=47332, avg=41462.60, stdev=622.20 00:26:12.841 lat (usec): min=40900, max=47348, avg=41473.35, stdev=622.45 00:26:12.841 clat percentiles (usec): 00:26:12.841 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:12.841 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:26:12.841 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:12.841 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:26:12.841 | 99.99th=[47449] 00:26:12.841 bw ( KiB/s): min= 352, max= 416, per=99.85%, avg=385.68, stdev=12.95, samples=19 00:26:12.841 iops : min= 88, max= 104, avg=96.42, stdev= 3.24, samples=19 00:26:12.841 
lat (msec) : 50=100.00% 00:26:12.841 cpu : usr=89.78%, sys=9.91%, ctx=10, majf=0, minf=227 00:26:12.841 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:12.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.841 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.841 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:12.841 00:26:12.841 Run status group 0 (all jobs): 00:26:12.841 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3856KiB (3949kB), run=10001-10001msec 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.841 00:26:12.841 real 0m11.046s 00:26:12.841 user 0m10.125s 00:26:12.841 sys 0m1.246s 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 ************************************ 00:26:12.841 END TEST fio_dif_1_default 00:26:12.841 ************************************ 00:26:12.841 19:20:51 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:12.841 19:20:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:12.841 19:20:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:12.841 19:20:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 ************************************ 00:26:12.841 START TEST fio_dif_1_multi_subsystems 00:26:12.841 ************************************ 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:12.841 19:20:51 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 bdev_null0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 [2024-07-15 19:20:51.711847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 bdev_null1 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.841 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:12.842 { 00:26:12.842 "params": { 00:26:12.842 "name": "Nvme$subsystem", 00:26:12.842 "trtype": "$TEST_TRANSPORT", 00:26:12.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.842 "adrfam": "ipv4", 00:26:12.842 "trsvcid": "$NVMF_PORT", 00:26:12.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.842 "hdgst": ${hdgst:-false}, 00:26:12.842 "ddgst": ${ddgst:-false} 00:26:12.842 }, 00:26:12.842 "method": "bdev_nvme_attach_controller" 00:26:12.842 } 00:26:12.842 EOF 00:26:12.842 )") 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:12.842 { 00:26:12.842 "params": { 00:26:12.842 "name": "Nvme$subsystem", 00:26:12.842 "trtype": "$TEST_TRANSPORT", 00:26:12.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.842 "adrfam": "ipv4", 00:26:12.842 "trsvcid": "$NVMF_PORT", 00:26:12.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.842 "hdgst": ${hdgst:-false}, 00:26:12.842 "ddgst": ${ddgst:-false} 00:26:12.842 }, 00:26:12.842 "method": "bdev_nvme_attach_controller" 00:26:12.842 } 00:26:12.842 EOF 00:26:12.842 )") 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
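[editor note] The merged JSON printed just below is what fio's spdk_bdev engine receives on /dev/fd/62 for the two multi-subsystem controllers. A rough standalone sketch of how gen_nvmf_target_json templates and joins the per-subsystem blocks is shown here; the field values mirror the listeners created above, while the outer "subsystems"/"bdev" envelope and the use of jq purely as a syntax check are assumptions, not a claim about the helper's exact internals.

config=()
for sub in 0 1; do
  block=$(cat <<EOF
{
  "params": {
    "name": "Nvme${sub}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
    "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
  config+=("$block")
done
IFS=,
# Assumed envelope: the standard SPDK bdev config layout the fio plugin reads;
# the test passes an equivalent merged document to fio on /dev/fd/62.
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .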
00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:12.842 "params": { 00:26:12.842 "name": "Nvme0", 00:26:12.842 "trtype": "tcp", 00:26:12.842 "traddr": "10.0.0.2", 00:26:12.842 "adrfam": "ipv4", 00:26:12.842 "trsvcid": "4420", 00:26:12.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:12.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:12.842 "hdgst": false, 00:26:12.842 "ddgst": false 00:26:12.842 }, 00:26:12.842 "method": "bdev_nvme_attach_controller" 00:26:12.842 },{ 00:26:12.842 "params": { 00:26:12.842 "name": "Nvme1", 00:26:12.842 "trtype": "tcp", 00:26:12.842 "traddr": "10.0.0.2", 00:26:12.842 "adrfam": "ipv4", 00:26:12.842 "trsvcid": "4420", 00:26:12.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:12.842 "hdgst": false, 00:26:12.842 "ddgst": false 00:26:12.842 }, 00:26:12.842 "method": "bdev_nvme_attach_controller" 00:26:12.842 }' 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:12.842 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:12.843 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:12.843 19:20:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:12.843 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:12.843 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:12.843 fio-3.35 00:26:12.843 Starting 2 threads 00:26:12.843 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.834 00:26:22.834 filename0: (groupid=0, jobs=1): err= 0: pid=3423510: Mon Jul 15 19:21:02 2024 00:26:22.834 read: IOPS=189, BW=758KiB/s (777kB/s)(7600KiB/10020msec) 00:26:22.834 slat (nsec): min=4382, max=30184, avg=9564.10, stdev=2710.87 00:26:22.834 clat (usec): min=807, max=43463, avg=21063.89, stdev=20098.99 00:26:22.834 lat (usec): min=815, max=43489, avg=21073.46, stdev=20098.84 00:26:22.834 clat percentiles (usec): 00:26:22.834 | 1.00th=[ 848], 5.00th=[ 881], 10.00th=[ 889], 20.00th=[ 898], 00:26:22.834 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[40633], 60.00th=[41157], 00:26:22.834 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:22.834 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:26:22.834 | 99.99th=[43254] 
00:26:22.834 bw ( KiB/s): min= 704, max= 768, per=57.61%, avg=758.40, stdev=18.28, samples=20 00:26:22.834 iops : min= 176, max= 192, avg=189.60, stdev= 4.57, samples=20 00:26:22.834 lat (usec) : 1000=45.00% 00:26:22.834 lat (msec) : 2=4.89%, 50=50.11% 00:26:22.834 cpu : usr=93.31%, sys=6.26%, ctx=45, majf=0, minf=150 00:26:22.834 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.834 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.834 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:22.834 filename1: (groupid=0, jobs=1): err= 0: pid=3423511: Mon Jul 15 19:21:02 2024 00:26:22.834 read: IOPS=139, BW=558KiB/s (571kB/s)(5584KiB/10014msec) 00:26:22.834 slat (nsec): min=4797, max=98800, avg=9702.90, stdev=4369.87 00:26:22.834 clat (usec): min=749, max=43328, avg=28661.24, stdev=19000.55 00:26:22.834 lat (usec): min=757, max=43341, avg=28670.95, stdev=19000.79 00:26:22.834 clat percentiles (usec): 00:26:22.834 | 1.00th=[ 758], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 857], 00:26:22.834 | 30.00th=[ 988], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:22.834 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:22.834 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:26:22.834 | 99.99th=[43254] 00:26:22.834 bw ( KiB/s): min= 352, max= 768, per=42.26%, avg=556.80, stdev=176.62, samples=20 00:26:22.834 iops : min= 88, max= 192, avg=139.20, stdev=44.15, samples=20 00:26:22.834 lat (usec) : 750=0.07%, 1000=30.59% 00:26:22.834 lat (msec) : 2=1.15%, 50=68.19% 00:26:22.834 cpu : usr=93.29%, sys=5.92%, ctx=21, majf=0, minf=134 00:26:22.834 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.834 issued rwts: total=1396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.834 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:22.834 00:26:22.834 Run status group 0 (all jobs): 00:26:22.834 READ: bw=1316KiB/s (1347kB/s), 558KiB/s-758KiB/s (571kB/s-777kB/s), io=12.9MiB (13.5MB), run=10014-10020msec 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.834 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.835 00:26:22.835 real 0m11.387s 00:26:22.835 user 0m20.163s 00:26:22.835 sys 0m1.546s 00:26:22.835 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:22.835 19:21:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 ************************************ 00:26:22.835 END TEST fio_dif_1_multi_subsystems 00:26:22.835 ************************************ 00:26:22.835 19:21:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:22.835 19:21:03 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:22.835 19:21:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:22.835 19:21:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.835 19:21:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 ************************************ 00:26:22.835 START TEST fio_dif_rand_params 00:26:22.835 ************************************ 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.835 19:21:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 bdev_null0 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.835 [2024-07-15 19:21:03.140061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.835 { 00:26:22.835 "params": { 00:26:22.835 "name": "Nvme$subsystem", 00:26:22.835 "trtype": "$TEST_TRANSPORT", 00:26:22.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.835 "adrfam": "ipv4", 00:26:22.835 "trsvcid": "$NVMF_PORT", 00:26:22.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.835 "hdgst": ${hdgst:-false}, 00:26:22.835 "ddgst": ${ddgst:-false} 00:26:22.835 }, 00:26:22.835 "method": "bdev_nvme_attach_controller" 00:26:22.835 } 00:26:22.835 EOF 00:26:22.835 )") 
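[editor note] The rpc_cmd calls traced above for the NULL_DIF=3 case can be reproduced by hand against a running nvmf_tgt. A minimal sketch, assuming the TCP transport already exists and that scripts/rpc.py is run from the SPDK source tree (rpc_cmd in this suite issues the same RPCs):

# DIF type 3 null bdev: 64 MiB, 512-byte blocks, 16-byte metadata
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# teardown, as destroy_subsystem does later in this log
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0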
00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
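[editor note] Before launching fio, the fio_plugin helper checks whether the spdk_bdev plugin was linked against a sanitizer runtime and, if so, puts that library ahead of the plugin in LD_PRELOAD. The trace above, condensed into a standalone sketch; ./bdev.json and ./job.fio are placeholders for the /dev/fd/62 and /dev/fd/61 descriptors the test actually uses, and only the libasan branch of the check is shown:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty string when not an ASan build
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio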
00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:22.835 "params": { 00:26:22.835 "name": "Nvme0", 00:26:22.835 "trtype": "tcp", 00:26:22.835 "traddr": "10.0.0.2", 00:26:22.835 "adrfam": "ipv4", 00:26:22.835 "trsvcid": "4420", 00:26:22.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:22.835 "hdgst": false, 00:26:22.835 "ddgst": false 00:26:22.835 }, 00:26:22.835 "method": "bdev_nvme_attach_controller" 00:26:22.835 }' 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:22.835 19:21:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.091 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:23.091 ... 
00:26:23.091 fio-3.35 00:26:23.091 Starting 3 threads 00:26:23.091 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.641 00:26:29.641 filename0: (groupid=0, jobs=1): err= 0: pid=3425021: Mon Jul 15 19:21:09 2024 00:26:29.641 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(114MiB/5008msec) 00:26:29.641 slat (nsec): min=6303, max=35558, avg=12494.32, stdev=3355.34 00:26:29.641 clat (usec): min=6982, max=92260, avg=16418.75, stdev=13039.14 00:26:29.641 lat (usec): min=6994, max=92274, avg=16431.24, stdev=13039.31 00:26:29.641 clat percentiles (usec): 00:26:29.641 | 1.00th=[ 7177], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[10028], 00:26:29.641 | 30.00th=[10814], 40.00th=[11338], 50.00th=[12256], 60.00th=[13566], 00:26:29.641 | 70.00th=[14746], 80.00th=[15664], 90.00th=[49021], 95.00th=[53740], 00:26:29.641 | 99.00th=[56886], 99.50th=[57410], 99.90th=[91751], 99.95th=[91751], 00:26:29.641 | 99.99th=[91751] 00:26:29.641 bw ( KiB/s): min=17920, max=33792, per=33.12%, avg=23321.60, stdev=5133.42, samples=10 00:26:29.641 iops : min= 140, max= 264, avg=182.20, stdev=40.10, samples=10 00:26:29.641 lat (msec) : 10=19.04%, 20=70.35%, 50=0.98%, 100=9.63% 00:26:29.641 cpu : usr=89.65%, sys=9.77%, ctx=20, majf=0, minf=106 00:26:29.641 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.641 issued rwts: total=914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:29.641 filename0: (groupid=0, jobs=1): err= 0: pid=3425022: Mon Jul 15 19:21:09 2024 00:26:29.641 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(110MiB/5045msec) 00:26:29.641 slat (nsec): min=7226, max=69251, avg=12322.92, stdev=3959.03 00:26:29.641 clat (usec): min=5695, max=94426, avg=17217.69, stdev=14715.89 00:26:29.641 lat (usec): min=5707, max=94438, avg=17230.01, stdev=14715.80 00:26:29.641 clat percentiles (usec): 00:26:29.641 | 1.00th=[ 6128], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 9503], 00:26:29.641 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11994], 60.00th=[13304], 00:26:29.641 | 70.00th=[14484], 80.00th=[16188], 90.00th=[51119], 95.00th=[53740], 00:26:29.641 | 99.00th=[57934], 99.50th=[58983], 99.90th=[94897], 99.95th=[94897], 00:26:29.641 | 99.99th=[94897] 00:26:29.641 bw ( KiB/s): min=12288, max=29952, per=31.82%, avg=22400.00, stdev=5700.43, samples=10 00:26:29.641 iops : min= 96, max= 234, avg=175.00, stdev=44.53, samples=10 00:26:29.641 lat (msec) : 10=28.25%, 20=58.09%, 50=1.82%, 100=11.85% 00:26:29.641 cpu : usr=90.70%, sys=8.86%, ctx=13, majf=0, minf=166 00:26:29.641 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.641 issued rwts: total=878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:29.641 filename0: (groupid=0, jobs=1): err= 0: pid=3425023: Mon Jul 15 19:21:09 2024 00:26:29.641 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(123MiB/5032msec) 00:26:29.641 slat (nsec): min=6291, max=36969, avg=12336.32, stdev=3427.76 00:26:29.641 clat (usec): min=4906, max=92183, avg=15336.35, stdev=13031.12 00:26:29.641 lat (usec): min=4917, max=92195, avg=15348.68, stdev=13031.07 00:26:29.641 clat percentiles (usec): 
00:26:29.641 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 8717], 00:26:29.641 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11469], 60.00th=[12518], 00:26:29.641 | 70.00th=[13435], 80.00th=[14877], 90.00th=[47449], 95.00th=[52167], 00:26:29.641 | 99.00th=[55837], 99.50th=[56886], 99.90th=[91751], 99.95th=[91751], 00:26:29.641 | 99.99th=[91751] 00:26:29.641 bw ( KiB/s): min=19200, max=31744, per=35.63%, avg=25088.00, stdev=4478.17, samples=10 00:26:29.641 iops : min= 150, max= 248, avg=196.00, stdev=34.99, samples=10 00:26:29.641 lat (msec) : 10=34.08%, 20=55.34%, 50=1.83%, 100=8.75% 00:26:29.641 cpu : usr=90.10%, sys=9.44%, ctx=14, majf=0, minf=76 00:26:29.641 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:29.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.641 issued rwts: total=983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:29.641 00:26:29.641 Run status group 0 (all jobs): 00:26:29.641 READ: bw=68.8MiB/s (72.1MB/s), 21.8MiB/s-24.4MiB/s (22.8MB/s-25.6MB/s), io=347MiB (364MB), run=5008-5045msec 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:29.641 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
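[editor note] The NULL_DIF=2 case that begins in the trace below fans out to three null bdevs with eight 4 KiB random-read jobs each at queue depth 16, i.e. the 24 threads reported further down. A hypothetical job file approximating what gen_fio_conf hands to fio on /dev/fd/61; block size, depth, job count and the filename0/1/2 section names come from the trace and fio banner, while the filename= mapping and thread=1 are assumptions:

cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=bdev_null0

[filename1]
filename=bdev_null1

[filename2]
filename=bdev_null2
EOF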
00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 bdev_null0 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 [2024-07-15 19:21:09.451583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 bdev_null1 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 bdev_null2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:26:29.642 { 00:26:29.642 "params": { 00:26:29.642 "name": "Nvme$subsystem", 00:26:29.642 "trtype": "$TEST_TRANSPORT", 00:26:29.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.642 "adrfam": "ipv4", 00:26:29.642 "trsvcid": "$NVMF_PORT", 00:26:29.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.642 "hdgst": ${hdgst:-false}, 00:26:29.642 "ddgst": ${ddgst:-false} 00:26:29.642 }, 00:26:29.642 "method": "bdev_nvme_attach_controller" 00:26:29.642 } 00:26:29.642 EOF 00:26:29.642 )") 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.642 { 00:26:29.642 "params": { 00:26:29.642 "name": "Nvme$subsystem", 00:26:29.642 "trtype": "$TEST_TRANSPORT", 00:26:29.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.642 "adrfam": "ipv4", 00:26:29.642 "trsvcid": "$NVMF_PORT", 00:26:29.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.642 "hdgst": ${hdgst:-false}, 00:26:29.642 "ddgst": ${ddgst:-false} 00:26:29.642 }, 00:26:29.642 "method": "bdev_nvme_attach_controller" 00:26:29.642 } 00:26:29.642 EOF 00:26:29.642 )") 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.642 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.642 { 00:26:29.642 "params": { 00:26:29.642 "name": "Nvme$subsystem", 00:26:29.642 "trtype": "$TEST_TRANSPORT", 00:26:29.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.642 "adrfam": "ipv4", 00:26:29.642 "trsvcid": "$NVMF_PORT", 00:26:29.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.643 "hdgst": ${hdgst:-false}, 00:26:29.643 "ddgst": ${ddgst:-false} 00:26:29.643 }, 00:26:29.643 "method": "bdev_nvme_attach_controller" 00:26:29.643 } 00:26:29.643 EOF 00:26:29.643 )") 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:29.643 "params": { 00:26:29.643 "name": "Nvme0", 00:26:29.643 "trtype": "tcp", 00:26:29.643 "traddr": "10.0.0.2", 00:26:29.643 "adrfam": "ipv4", 00:26:29.643 "trsvcid": "4420", 00:26:29.643 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.643 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.643 "hdgst": false, 00:26:29.643 "ddgst": false 00:26:29.643 }, 00:26:29.643 "method": "bdev_nvme_attach_controller" 00:26:29.643 },{ 00:26:29.643 "params": { 00:26:29.643 "name": "Nvme1", 00:26:29.643 "trtype": "tcp", 00:26:29.643 "traddr": "10.0.0.2", 00:26:29.643 "adrfam": "ipv4", 00:26:29.643 "trsvcid": "4420", 00:26:29.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.643 "hdgst": false, 00:26:29.643 "ddgst": false 00:26:29.643 }, 00:26:29.643 "method": "bdev_nvme_attach_controller" 00:26:29.643 },{ 00:26:29.643 "params": { 00:26:29.643 "name": "Nvme2", 00:26:29.643 "trtype": "tcp", 00:26:29.643 "traddr": "10.0.0.2", 00:26:29.643 "adrfam": "ipv4", 00:26:29.643 "trsvcid": "4420", 00:26:29.643 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:29.643 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:29.643 "hdgst": false, 00:26:29.643 "ddgst": false 00:26:29.643 }, 00:26:29.643 "method": "bdev_nvme_attach_controller" 00:26:29.643 }' 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:29.643 19:21:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.643 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:29.643 ... 00:26:29.643 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:29.643 ... 00:26:29.643 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:29.643 ... 00:26:29.643 fio-3.35 00:26:29.643 Starting 24 threads 00:26:29.643 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.841 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426387: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:26:41.842 slat (usec): min=8, max=120, avg=49.09, stdev=18.70 00:26:41.842 clat (usec): min=25756, max=44955, avg=33076.47, stdev=955.00 00:26:41.842 lat (usec): min=25838, max=45005, avg=33125.56, stdev=953.57 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.842 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:41.842 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.842 | 99.00th=[35390], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:26:41.842 | 99.99th=[44827] 00:26:41.842 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1906.53, stdev=58.73, samples=19 00:26:41.842 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:26:41.842 lat (msec) : 50=100.00% 00:26:41.842 cpu : usr=97.84%, sys=1.71%, ctx=16, majf=0, minf=9 00:26:41.842 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426388: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=477, BW=1908KiB/s (1954kB/s)(18.6MiB/10003msec) 00:26:41.842 slat (usec): min=8, max=107, avg=17.39, stdev=12.59 00:26:41.842 clat (usec): min=2830, max=92766, avg=33387.42, stdev=4340.60 00:26:41.842 lat (usec): min=2838, max=92816, avg=33404.81, stdev=4341.10 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[22414], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:41.842 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:26:41.842 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.842 | 99.00th=[40109], 99.50th=[55837], 99.90th=[90702], 99.95th=[91751], 00:26:41.842 | 99.99th=[92799] 00:26:41.842 bw ( KiB/s): min= 1648, max= 2032, per=4.14%, avg=1895.58, stdev=75.68, samples=19 00:26:41.842 iops : min= 412, max= 508, avg=473.89, stdev=18.92, samples=19 00:26:41.842 lat (msec) : 4=0.29%, 10=0.04%, 20=0.31%, 50=98.74%, 100=0.61% 00:26:41.842 
cpu : usr=91.22%, sys=4.07%, ctx=206, majf=0, minf=9 00:26:41.842 IO depths : 1=2.8%, 2=8.9%, 4=24.4%, 8=54.1%, 16=9.7%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426389: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10010msec) 00:26:41.842 slat (usec): min=6, max=122, avg=17.69, stdev=11.78 00:26:41.842 clat (usec): min=12233, max=41568, avg=33260.91, stdev=1826.83 00:26:41.842 lat (usec): min=12307, max=41587, avg=33278.59, stdev=1826.38 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[30278], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:26:41.842 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:26:41.842 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:26:41.842 | 99.00th=[35914], 99.50th=[36439], 99.90th=[41681], 99.95th=[41681], 00:26:41.842 | 99.99th=[41681] 00:26:41.842 bw ( KiB/s): min= 1840, max= 2048, per=4.19%, avg=1917.60, stdev=37.89, samples=20 00:26:41.842 iops : min= 460, max= 512, avg=479.40, stdev= 9.47, samples=20 00:26:41.842 lat (msec) : 20=0.67%, 50=99.33% 00:26:41.842 cpu : usr=93.88%, sys=3.27%, ctx=269, majf=0, minf=9 00:26:41.842 IO depths : 1=0.1%, 2=0.1%, 4=6.6%, 8=80.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=89.0%, 8=5.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426390: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:26:41.842 slat (nsec): min=6219, max=75778, avg=35768.71, stdev=13026.75 00:26:41.842 clat (usec): min=11900, max=42838, avg=33070.67, stdev=1871.58 00:26:41.842 lat (usec): min=11912, max=42875, avg=33106.44, stdev=1871.89 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[29230], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:41.842 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:41.842 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.842 | 99.00th=[35914], 99.50th=[35914], 99.90th=[41157], 99.95th=[41157], 00:26:41.842 | 99.99th=[42730] 00:26:41.842 bw ( KiB/s): min= 1792, max= 2052, per=4.18%, avg=1913.80, stdev=51.00, samples=20 00:26:41.842 iops : min= 448, max= 513, avg=478.45, stdev=12.75, samples=20 00:26:41.842 lat (msec) : 20=0.67%, 50=99.33% 00:26:41.842 cpu : usr=84.72%, sys=6.72%, ctx=336, majf=0, minf=9 00:26:41.842 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426391: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=476, BW=1906KiB/s 
(1952kB/s)(18.6MiB/10005msec) 00:26:41.842 slat (nsec): min=3849, max=86117, avg=41063.47, stdev=11979.57 00:26:41.842 clat (usec): min=25738, max=66902, avg=33184.18, stdev=2057.80 00:26:41.842 lat (usec): min=25767, max=66914, avg=33225.24, stdev=2056.33 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[32375], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.842 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:41.842 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:26:41.842 | 99.00th=[35390], 99.50th=[35914], 99.90th=[66847], 99.95th=[66847], 00:26:41.842 | 99.99th=[66847] 00:26:41.842 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1899.79, stdev=64.19, samples=19 00:26:41.842 iops : min= 416, max= 480, avg=474.95, stdev=16.05, samples=19 00:26:41.842 lat (msec) : 50=99.66%, 100=0.34% 00:26:41.842 cpu : usr=89.08%, sys=4.97%, ctx=263, majf=0, minf=9 00:26:41.842 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426392: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:26:41.842 slat (nsec): min=3993, max=84062, avg=40102.37, stdev=15186.02 00:26:41.842 clat (usec): min=18751, max=41289, avg=33131.68, stdev=1159.91 00:26:41.842 lat (usec): min=18756, max=41350, avg=33171.78, stdev=1160.29 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[31065], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:41.842 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:41.842 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.842 | 99.00th=[35914], 99.50th=[36963], 99.90th=[41157], 99.95th=[41157], 00:26:41.842 | 99.99th=[41157] 00:26:41.842 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1913.26, stdev=29.37, samples=19 00:26:41.842 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:26:41.842 lat (msec) : 20=0.33%, 50=99.67% 00:26:41.842 cpu : usr=92.19%, sys=4.63%, ctx=1004, majf=0, minf=9 00:26:41.842 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426393: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10008msec) 00:26:41.842 slat (nsec): min=3954, max=87633, avg=44481.31, stdev=11485.52 00:26:41.842 clat (usec): min=24329, max=69208, avg=33182.61, stdev=2218.99 00:26:41.842 lat (usec): min=24338, max=69225, avg=33227.09, stdev=2217.00 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.842 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:41.842 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:26:41.842 | 99.00th=[35390], 99.50th=[36439], 
99.90th=[68682], 99.95th=[68682], 00:26:41.842 | 99.99th=[69731] 00:26:41.842 bw ( KiB/s): min= 1667, max= 2048, per=4.15%, avg=1899.95, stdev=76.57, samples=19 00:26:41.842 iops : min= 416, max= 512, avg=474.95, stdev=19.27, samples=19 00:26:41.842 lat (msec) : 50=99.66%, 100=0.34% 00:26:41.842 cpu : usr=93.66%, sys=3.40%, ctx=166, majf=0, minf=9 00:26:41.842 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename0: (groupid=0, jobs=1): err= 0: pid=3426394: Mon Jul 15 19:21:20 2024 00:26:41.842 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10010msec) 00:26:41.842 slat (nsec): min=8170, max=99224, avg=31042.81, stdev=10051.59 00:26:41.842 clat (usec): min=26260, max=43742, avg=33192.22, stdev=904.38 00:26:41.842 lat (usec): min=26269, max=43763, avg=33223.26, stdev=904.08 00:26:41.842 clat percentiles (usec): 00:26:41.842 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:41.842 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:41.842 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.842 | 99.00th=[35390], 99.50th=[36439], 99.90th=[43779], 99.95th=[43779], 00:26:41.842 | 99.99th=[43779] 00:26:41.842 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1906.53, stdev=40.36, samples=19 00:26:41.842 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:26:41.842 lat (msec) : 50=100.00% 00:26:41.842 cpu : usr=93.78%, sys=3.27%, ctx=62, majf=0, minf=9 00:26:41.842 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.842 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.842 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.842 filename1: (groupid=0, jobs=1): err= 0: pid=3426395: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.8MiB/10007msec) 00:26:41.843 slat (nsec): min=6930, max=82293, avg=22279.52, stdev=12045.40 00:26:41.843 clat (usec): min=10116, max=45807, avg=33019.18, stdev=2322.39 00:26:41.843 lat (usec): min=10127, max=45843, avg=33041.46, stdev=2322.85 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[18220], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:41.843 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:26:41.843 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.843 | 99.00th=[35914], 99.50th=[36439], 99.90th=[41681], 99.95th=[42206], 00:26:41.843 | 99.99th=[45876] 00:26:41.843 bw ( KiB/s): min= 1792, max= 2224, per=4.21%, avg=1929.26, stdev=77.16, samples=19 00:26:41.843 iops : min= 448, max= 556, avg=482.32, stdev=19.29, samples=19 00:26:41.843 lat (msec) : 20=1.20%, 50=98.80% 00:26:41.843 cpu : usr=97.97%, sys=1.46%, ctx=113, majf=0, minf=9 00:26:41.843 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:41.843 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=3426396: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:26:41.843 slat (nsec): min=8735, max=99097, avg=44774.98, stdev=13666.78 00:26:41.843 clat (usec): min=24662, max=43711, avg=33096.32, stdev=956.70 00:26:41.843 lat (usec): min=24671, max=43732, avg=33141.09, stdev=955.50 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.843 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:41.843 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:26:41.843 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:26:41.843 | 99.99th=[43779] 00:26:41.843 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1906.53, stdev=58.73, samples=19 00:26:41.843 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:26:41.843 lat (msec) : 50=100.00% 00:26:41.843 cpu : usr=93.72%, sys=3.35%, ctx=84, majf=0, minf=9 00:26:41.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=3426397: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:26:41.843 slat (usec): min=3, max=141, avg=36.65, stdev=18.28 00:26:41.843 clat (usec): min=11831, max=42070, avg=33091.48, stdev=1847.95 00:26:41.843 lat (usec): min=11842, max=42098, avg=33128.13, stdev=1847.49 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[31065], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:41.843 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:26:41.843 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.843 | 99.00th=[35390], 99.50th=[35914], 99.90th=[41681], 99.95th=[42206], 00:26:41.843 | 99.99th=[42206] 00:26:41.843 bw ( KiB/s): min= 1792, max= 2052, per=4.18%, avg=1913.80, stdev=51.00, samples=20 00:26:41.843 iops : min= 448, max= 513, avg=478.45, stdev=12.75, samples=20 00:26:41.843 lat (msec) : 20=0.67%, 50=99.33% 00:26:41.843 cpu : usr=95.34%, sys=2.64%, ctx=191, majf=0, minf=9 00:26:41.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=3426398: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:26:41.843 slat (usec): min=15, max=179, avg=48.73, stdev=31.38 00:26:41.843 clat (usec): min=25450, max=43954, avg=32943.91, stdev=950.12 00:26:41.843 lat (usec): min=25473, max=43972, avg=32992.65, stdev=954.58 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[32113], 5.00th=[32375], 
10.00th=[32375], 20.00th=[32637], 00:26:41.843 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:41.843 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:26:41.843 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:26:41.843 | 99.99th=[43779] 00:26:41.843 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1906.53, stdev=58.73, samples=19 00:26:41.843 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:26:41.843 lat (msec) : 50=100.00% 00:26:41.843 cpu : usr=98.26%, sys=1.29%, ctx=13, majf=0, minf=9 00:26:41.843 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=3426399: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10010msec) 00:26:41.843 slat (usec): min=4, max=107, avg=45.45, stdev=17.98 00:26:41.843 clat (usec): min=18076, max=69563, avg=33235.16, stdev=2877.85 00:26:41.843 lat (usec): min=18086, max=69578, avg=33280.61, stdev=2876.93 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[24249], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.843 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:41.843 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.843 | 99.00th=[41681], 99.50th=[42730], 99.90th=[69731], 99.95th=[69731], 00:26:41.843 | 99.99th=[69731] 00:26:41.843 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1899.79, stdev=77.26, samples=19 00:26:41.843 iops : min= 416, max= 512, avg=474.95, stdev=19.31, samples=19 00:26:41.843 lat (msec) : 20=0.04%, 50=99.58%, 100=0.38% 00:26:41.843 cpu : usr=98.35%, sys=1.25%, ctx=15, majf=0, minf=9 00:26:41.843 IO depths : 1=2.6%, 2=8.9%, 4=25.0%, 8=53.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=3426400: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:26:41.843 slat (usec): min=4, max=113, avg=47.23, stdev=16.99 00:26:41.843 clat (usec): min=25658, max=62093, avg=33113.74, stdev=1807.82 00:26:41.843 lat (usec): min=25682, max=62107, avg=33160.97, stdev=1806.57 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.843 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:41.843 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:26:41.843 | 99.00th=[35390], 99.50th=[35914], 99.90th=[62129], 99.95th=[62129], 00:26:41.843 | 99.99th=[62129] 00:26:41.843 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.95, stdev=47.58, samples=19 00:26:41.843 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:26:41.843 lat (msec) : 50=99.66%, 100=0.34% 00:26:41.843 cpu : usr=98.17%, sys=1.43%, ctx=16, majf=0, minf=9 00:26:41.843 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=3426401: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.7MiB/10029msec) 00:26:41.843 slat (nsec): min=5263, max=62535, avg=26387.72, stdev=11527.25 00:26:41.843 clat (usec): min=18473, max=41432, avg=33244.10, stdev=1140.85 00:26:41.843 lat (usec): min=18478, max=41458, avg=33270.49, stdev=1141.24 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[30278], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:41.843 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:26:41.843 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.843 | 99.00th=[35390], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:26:41.843 | 99.99th=[41681] 00:26:41.843 bw ( KiB/s): min= 1808, max= 1920, per=4.18%, avg=1913.26, stdev=25.75, samples=19 00:26:41.843 iops : min= 452, max= 480, avg=478.32, stdev= 6.44, samples=19 00:26:41.843 lat (msec) : 20=0.33%, 50=99.67% 00:26:41.843 cpu : usr=98.21%, sys=1.38%, ctx=16, majf=0, minf=9 00:26:41.843 IO depths : 1=1.6%, 2=7.8%, 4=25.0%, 8=54.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:41.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.843 issued rwts: total=4798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.843 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.843 filename1: (groupid=0, jobs=1): err= 0: pid=3426402: Mon Jul 15 19:21:20 2024 00:26:41.843 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10007msec) 00:26:41.843 slat (nsec): min=4661, max=77136, avg=29230.69, stdev=11468.88 00:26:41.843 clat (usec): min=7683, max=63930, avg=33055.96, stdev=3127.85 00:26:41.843 lat (usec): min=7691, max=63947, avg=33085.19, stdev=3129.00 00:26:41.843 clat percentiles (usec): 00:26:41.843 | 1.00th=[21627], 5.00th=[31589], 10.00th=[32637], 20.00th=[32900], 00:26:41.843 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:26:41.843 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.843 | 99.00th=[36439], 99.50th=[54789], 99.90th=[63701], 99.95th=[63701], 00:26:41.843 | 99.99th=[63701] 00:26:41.843 bw ( KiB/s): min= 1715, max= 2144, per=4.19%, avg=1917.63, stdev=72.71, samples=19 00:26:41.843 iops : min= 428, max= 536, avg=479.37, stdev=18.29, samples=19 00:26:41.843 lat (msec) : 10=0.29%, 20=0.67%, 50=98.50%, 100=0.54% 00:26:41.844 cpu : usr=98.21%, sys=1.39%, ctx=15, majf=0, minf=9 00:26:41.844 IO depths : 1=1.1%, 2=6.9%, 4=23.4%, 8=57.0%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=94.0%, 8=0.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426403: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10002msec) 00:26:41.844 slat (nsec): min=4356, 
max=95609, avg=42495.35, stdev=15506.45 00:26:41.844 clat (usec): min=17418, max=63886, avg=33257.85, stdev=2706.16 00:26:41.844 lat (usec): min=17427, max=63901, avg=33300.35, stdev=2704.78 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[26608], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.844 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:26:41.844 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.844 | 99.00th=[45351], 99.50th=[55313], 99.90th=[63701], 99.95th=[63701], 00:26:41.844 | 99.99th=[63701] 00:26:41.844 bw ( KiB/s): min= 1667, max= 1968, per=4.14%, avg=1895.74, stdev=68.44, samples=19 00:26:41.844 iops : min= 416, max= 492, avg=473.89, stdev=17.25, samples=19 00:26:41.844 lat (msec) : 20=0.25%, 50=98.95%, 100=0.80% 00:26:41.844 cpu : usr=97.96%, sys=1.64%, ctx=19, majf=0, minf=9 00:26:41.844 IO depths : 1=5.2%, 2=10.6%, 4=23.0%, 8=53.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426404: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=476, BW=1906KiB/s (1951kB/s)(18.6MiB/10008msec) 00:26:41.844 slat (usec): min=4, max=127, avg=48.96, stdev=18.84 00:26:41.844 clat (usec): min=25586, max=69301, avg=33167.09, stdev=2197.46 00:26:41.844 lat (usec): min=25627, max=69317, avg=33216.04, stdev=2194.64 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:26:41.844 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:41.844 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:26:41.844 | 99.00th=[35390], 99.50th=[35914], 99.90th=[69731], 99.95th=[69731], 00:26:41.844 | 99.99th=[69731] 00:26:41.844 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1899.79, stdev=77.07, samples=19 00:26:41.844 iops : min= 416, max= 512, avg=474.95, stdev=19.27, samples=19 00:26:41.844 lat (msec) : 50=99.66%, 100=0.34% 00:26:41.844 cpu : usr=98.07%, sys=1.53%, ctx=26, majf=0, minf=9 00:26:41.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426405: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10004msec) 00:26:41.844 slat (usec): min=4, max=116, avg=48.93, stdev=18.35 00:26:41.844 clat (usec): min=25671, max=65202, avg=33119.98, stdev=1973.46 00:26:41.844 lat (usec): min=25717, max=65218, avg=33168.91, stdev=1971.55 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:41.844 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:26:41.844 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:26:41.844 | 99.00th=[35390], 99.50th=[35914], 99.90th=[65274], 99.95th=[65274], 00:26:41.844 | 99.99th=[65274] 00:26:41.844 bw ( 
KiB/s): min= 1664, max= 1920, per=4.15%, avg=1899.79, stdev=64.19, samples=19 00:26:41.844 iops : min= 416, max= 480, avg=474.95, stdev=16.05, samples=19 00:26:41.844 lat (msec) : 50=99.66%, 100=0.34% 00:26:41.844 cpu : usr=98.11%, sys=1.49%, ctx=14, majf=0, minf=9 00:26:41.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426406: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10005msec) 00:26:41.844 slat (usec): min=6, max=105, avg=35.83, stdev=18.70 00:26:41.844 clat (usec): min=5946, max=95966, avg=33395.14, stdev=4205.52 00:26:41.844 lat (usec): min=5955, max=95981, avg=33430.97, stdev=4204.27 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[26608], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:41.844 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:26:41.844 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.844 | 99.00th=[43254], 99.50th=[54264], 99.90th=[93848], 99.95th=[93848], 00:26:41.844 | 99.99th=[95945] 00:26:41.844 bw ( KiB/s): min= 1587, max= 1920, per=4.14%, avg=1894.05, stdev=77.76, samples=19 00:26:41.844 iops : min= 396, max= 480, avg=473.47, stdev=19.61, samples=19 00:26:41.844 lat (msec) : 10=0.21%, 20=0.13%, 50=98.95%, 100=0.72% 00:26:41.844 cpu : usr=98.20%, sys=1.41%, ctx=15, majf=0, minf=9 00:26:41.844 IO depths : 1=0.1%, 2=5.1%, 4=20.4%, 8=60.9%, 16=13.5%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=93.4%, 8=2.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426407: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10004msec) 00:26:41.844 slat (nsec): min=5154, max=88645, avg=25418.37, stdev=17162.90 00:26:41.844 clat (usec): min=13713, max=96471, avg=33526.08, stdev=4041.92 00:26:41.844 lat (usec): min=13749, max=96486, avg=33551.50, stdev=4040.47 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[26084], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:41.844 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:26:41.844 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.844 | 99.00th=[42730], 99.50th=[55313], 99.90th=[94897], 99.95th=[94897], 00:26:41.844 | 99.99th=[96994] 00:26:41.844 bw ( KiB/s): min= 1648, max= 1920, per=4.14%, avg=1893.89, stdev=65.58, samples=19 00:26:41.844 iops : min= 412, max= 480, avg=473.47, stdev=16.40, samples=19 00:26:41.844 lat (msec) : 20=0.38%, 50=99.07%, 100=0.55% 00:26:41.844 cpu : usr=95.67%, sys=2.49%, ctx=77, majf=0, minf=9 00:26:41.844 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=74.5%, 16=16.9%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=90.7%, 8=7.8%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4754,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426408: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=478, BW=1915KiB/s (1961kB/s)(18.8MiB/10026msec) 00:26:41.844 slat (nsec): min=5421, max=69590, avg=31455.76, stdev=10085.22 00:26:41.844 clat (usec): min=18379, max=41399, avg=33154.32, stdev=1173.05 00:26:41.844 lat (usec): min=18385, max=41442, avg=33185.77, stdev=1173.82 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[28967], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:41.844 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:26:41.844 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.844 | 99.00th=[35390], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:26:41.844 | 99.99th=[41157] 00:26:41.844 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1913.26, stdev=29.37, samples=19 00:26:41.844 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:26:41.844 lat (msec) : 20=0.33%, 50=99.67% 00:26:41.844 cpu : usr=98.32%, sys=1.29%, ctx=16, majf=0, minf=9 00:26:41.844 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426409: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:26:41.844 slat (usec): min=9, max=200, avg=49.29, stdev=30.64 00:26:41.844 clat (usec): min=25439, max=44978, avg=32944.05, stdev=952.78 00:26:41.844 lat (usec): min=25458, max=45012, avg=32993.34, stdev=956.70 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:26:41.844 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:26:41.844 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:26:41.844 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43779], 99.95th=[44827], 00:26:41.844 | 99.99th=[44827] 00:26:41.844 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1906.53, stdev=58.73, samples=19 00:26:41.844 iops : min= 448, max= 512, avg=476.63, stdev=14.68, samples=19 00:26:41.844 lat (msec) : 50=100.00% 00:26:41.844 cpu : usr=98.36%, sys=1.19%, ctx=14, majf=0, minf=9 00:26:41.844 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.844 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.844 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.844 filename2: (groupid=0, jobs=1): err= 0: pid=3426410: Mon Jul 15 19:21:20 2024 00:26:41.844 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10007msec) 00:26:41.844 slat (usec): min=8, max=112, avg=26.58, stdev=14.40 00:26:41.844 clat (usec): min=11995, max=41451, avg=33129.10, stdev=1794.73 00:26:41.844 lat (usec): min=12017, max=41540, avg=33155.69, stdev=1793.86 00:26:41.844 clat percentiles (usec): 00:26:41.844 | 1.00th=[30540], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:26:41.844 | 30.00th=[32900], 
40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:26:41.845 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:26:41.845 | 99.00th=[35390], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:26:41.845 | 99.99th=[41681] 00:26:41.845 bw ( KiB/s): min= 1920, max= 1920, per=4.19%, avg=1920.00, stdev= 0.00, samples=19 00:26:41.845 iops : min= 480, max= 480, avg=480.00, stdev= 0.00, samples=19 00:26:41.845 lat (msec) : 20=0.67%, 50=99.33% 00:26:41.845 cpu : usr=98.17%, sys=1.42%, ctx=15, majf=0, minf=9 00:26:41.845 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:41.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.845 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.845 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:41.845 00:26:41.845 Run status group 0 (all jobs): 00:26:41.845 READ: bw=44.7MiB/s (46.9MB/s), 1901KiB/s-1927KiB/s (1946kB/s-1974kB/s), io=448MiB (470MB), run=10002-10029msec 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 bdev_null0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 [2024-07-15 19:21:21.247890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 bdev_null1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:41.845 { 00:26:41.845 "params": { 00:26:41.845 "name": "Nvme$subsystem", 00:26:41.845 "trtype": "$TEST_TRANSPORT", 00:26:41.845 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:41.845 "adrfam": "ipv4", 00:26:41.845 "trsvcid": "$NVMF_PORT", 00:26:41.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.845 "hdgst": ${hdgst:-false}, 00:26:41.845 "ddgst": ${ddgst:-false} 00:26:41.845 }, 00:26:41.845 "method": "bdev_nvme_attach_controller" 00:26:41.845 } 00:26:41.845 EOF 00:26:41.845 )") 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.845 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:41.846 { 00:26:41.846 "params": { 00:26:41.846 "name": "Nvme$subsystem", 00:26:41.846 "trtype": "$TEST_TRANSPORT", 00:26:41.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:41.846 "adrfam": "ipv4", 00:26:41.846 "trsvcid": "$NVMF_PORT", 00:26:41.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:41.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:41.846 "hdgst": ${hdgst:-false}, 00:26:41.846 "ddgst": ${ddgst:-false} 00:26:41.846 }, 00:26:41.846 "method": "bdev_nvme_attach_controller" 00:26:41.846 } 00:26:41.846 EOF 00:26:41.846 )") 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:41.846 "params": { 00:26:41.846 "name": "Nvme0", 00:26:41.846 "trtype": "tcp", 00:26:41.846 "traddr": "10.0.0.2", 00:26:41.846 "adrfam": "ipv4", 00:26:41.846 "trsvcid": "4420", 00:26:41.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:41.846 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:41.846 "hdgst": false, 00:26:41.846 "ddgst": false 00:26:41.846 }, 00:26:41.846 "method": "bdev_nvme_attach_controller" 00:26:41.846 },{ 00:26:41.846 "params": { 00:26:41.846 "name": "Nvme1", 00:26:41.846 "trtype": "tcp", 00:26:41.846 "traddr": "10.0.0.2", 00:26:41.846 "adrfam": "ipv4", 00:26:41.846 "trsvcid": "4420", 00:26:41.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:41.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:41.846 "hdgst": false, 00:26:41.846 "ddgst": false 00:26:41.846 }, 00:26:41.846 "method": "bdev_nvme_attach_controller" 00:26:41.846 }' 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:41.846 19:21:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:41.846 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:41.846 ... 00:26:41.846 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:41.846 ... 
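For reference, the nvmf target JSON assembled by gen_nvmf_target_json above and handed to the spdk_bdev fio plugin over /dev/fd/62 reduces to two bdev_nvme_attach_controller entries. A minimal sketch, reformatting the printf output captured in this run (the outer wrapper added by the helper is omitted, and the Nvme1 entry differs only in cnode1/host1):

# Sketch only: values reproduced from the captured printf output above.
cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF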
00:26:41.846 fio-3.35 00:26:41.846 Starting 4 threads 00:26:41.846 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.108 00:26:47.108 filename0: (groupid=0, jobs=1): err= 0: pid=3427673: Mon Jul 15 19:21:27 2024 00:26:47.108 read: IOPS=2070, BW=16.2MiB/s (17.0MB/s)(80.9MiB/5002msec) 00:26:47.108 slat (nsec): min=3890, max=30218, avg=10771.33, stdev=3292.01 00:26:47.108 clat (usec): min=3009, max=51893, avg=3830.84, stdev=1504.67 00:26:47.108 lat (usec): min=3017, max=51905, avg=3841.61, stdev=1504.15 00:26:47.108 clat percentiles (usec): 00:26:47.108 | 1.00th=[ 3064], 5.00th=[ 3163], 10.00th=[ 3294], 20.00th=[ 3359], 00:26:47.108 | 30.00th=[ 3392], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3589], 00:26:47.108 | 70.00th=[ 3654], 80.00th=[ 3884], 90.00th=[ 5145], 95.00th=[ 5211], 00:26:47.108 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 6718], 99.95th=[51643], 00:26:47.108 | 99.99th=[51643] 00:26:47.108 bw ( KiB/s): min=14877, max=16848, per=26.40%, avg=16559.70, stdev=615.66, samples=10 00:26:47.108 iops : min= 1859, max= 2106, avg=2069.90, stdev=77.15, samples=10 00:26:47.108 lat (msec) : 4=80.65%, 10=19.27%, 100=0.08% 00:26:47.108 cpu : usr=94.06%, sys=5.46%, ctx=5, majf=0, minf=0 00:26:47.108 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 issued rwts: total=10356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.108 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:47.108 filename0: (groupid=0, jobs=1): err= 0: pid=3427674: Mon Jul 15 19:21:27 2024 00:26:47.108 read: IOPS=2087, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5001msec) 00:26:47.108 slat (usec): min=4, max=215, avg=12.73, stdev= 4.41 00:26:47.108 clat (usec): min=2302, max=8720, avg=3792.68, stdev=710.55 00:26:47.108 lat (usec): min=2310, max=8734, avg=3805.41, stdev=709.18 00:26:47.108 clat percentiles (usec): 00:26:47.108 | 1.00th=[ 3032], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3326], 00:26:47.108 | 30.00th=[ 3392], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3621], 00:26:47.108 | 70.00th=[ 3654], 80.00th=[ 3884], 90.00th=[ 5145], 95.00th=[ 5211], 00:26:47.108 | 99.00th=[ 5473], 99.50th=[ 5735], 99.90th=[ 6456], 99.95th=[ 8586], 00:26:47.108 | 99.99th=[ 8717] 00:26:47.108 bw ( KiB/s): min=16288, max=16848, per=26.69%, avg=16744.89, stdev=174.19, samples=9 00:26:47.108 iops : min= 2036, max= 2106, avg=2093.11, stdev=21.77, samples=9 00:26:47.108 lat (msec) : 4=80.48%, 10=19.52% 00:26:47.108 cpu : usr=92.70%, sys=6.34%, ctx=15, majf=0, minf=9 00:26:47.108 IO depths : 1=0.1%, 2=0.1%, 4=72.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 issued rwts: total=10442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.108 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:47.108 filename1: (groupid=0, jobs=1): err= 0: pid=3427675: Mon Jul 15 19:21:27 2024 00:26:47.108 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5004msec) 00:26:47.108 slat (nsec): min=4272, max=33091, avg=10431.29, stdev=2768.95 00:26:47.108 clat (usec): min=2045, max=10631, avg=4325.70, stdev=278.26 00:26:47.108 lat (usec): min=2059, max=10648, avg=4336.13, stdev=278.04 00:26:47.108 clat percentiles (usec): 00:26:47.108 | 1.00th=[ 3392], 5.00th=[ 4113], 10.00th=[ 
4146], 20.00th=[ 4178], 00:26:47.108 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:26:47.108 | 70.00th=[ 4424], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4490], 00:26:47.108 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5604], 99.95th=[10421], 00:26:47.108 | 99.99th=[10683] 00:26:47.108 bw ( KiB/s): min=14592, max=15024, per=23.45%, avg=14707.20, stdev=118.73, samples=10 00:26:47.108 iops : min= 1824, max= 1878, avg=1838.40, stdev=14.84, samples=10 00:26:47.108 lat (msec) : 4=2.76%, 10=97.15%, 20=0.09% 00:26:47.108 cpu : usr=94.52%, sys=4.96%, ctx=11, majf=0, minf=0 00:26:47.108 IO depths : 1=0.1%, 2=0.2%, 4=62.7%, 8=37.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 issued rwts: total=9200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.108 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:47.108 filename1: (groupid=0, jobs=1): err= 0: pid=3427676: Mon Jul 15 19:21:27 2024 00:26:47.108 read: IOPS=1847, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5001msec) 00:26:47.108 slat (nsec): min=4242, max=33003, avg=11773.01, stdev=3296.68 00:26:47.108 clat (usec): min=1396, max=8048, avg=4303.62, stdev=262.34 00:26:47.108 lat (usec): min=1404, max=8061, avg=4315.39, stdev=262.28 00:26:47.108 clat percentiles (usec): 00:26:47.108 | 1.00th=[ 3163], 5.00th=[ 4080], 10.00th=[ 4146], 20.00th=[ 4178], 00:26:47.108 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4359], 00:26:47.108 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4490], 95.00th=[ 4490], 00:26:47.108 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[ 7898], 00:26:47.108 | 99.99th=[ 8029] 00:26:47.108 bw ( KiB/s): min=14592, max=15262, per=23.49%, avg=14734.00, stdev=203.07, samples=9 00:26:47.108 iops : min= 1824, max= 1907, avg=1841.67, stdev=25.14, samples=9 00:26:47.108 lat (msec) : 2=0.03%, 4=4.06%, 10=95.91% 00:26:47.108 cpu : usr=94.56%, sys=4.92%, ctx=8, majf=0, minf=9 00:26:47.108 IO depths : 1=0.1%, 2=0.4%, 4=62.4%, 8=37.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.108 issued rwts: total=9238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.108 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:47.108 00:26:47.108 Run status group 0 (all jobs): 00:26:47.108 READ: bw=61.3MiB/s (64.2MB/s), 14.4MiB/s-16.3MiB/s (15.1MB/s-17.1MB/s), io=307MiB (321MB), run=5001-5004msec 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:47.108 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.109 00:26:47.109 real 0m24.390s 00:26:47.109 user 4m27.071s 00:26:47.109 sys 0m9.256s 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.109 19:21:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:47.109 ************************************ 00:26:47.109 END TEST fio_dif_rand_params 00:26:47.109 ************************************ 00:26:47.109 19:21:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:47.109 19:21:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:47.109 19:21:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:47.109 19:21:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.109 19:21:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:47.367 ************************************ 00:26:47.367 START TEST fio_dif_digest 00:26:47.367 ************************************ 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:47.367 19:21:27 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.367 bdev_null0 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.367 [2024-07-15 19:21:27.576859] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.367 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:47.368 { 00:26:47.368 "params": { 00:26:47.368 "name": "Nvme$subsystem", 00:26:47.368 "trtype": "$TEST_TRANSPORT", 00:26:47.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.368 "adrfam": "ipv4", 00:26:47.368 "trsvcid": "$NVMF_PORT", 00:26:47.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.368 "hdgst": ${hdgst:-false}, 00:26:47.368 "ddgst": 
${ddgst:-false} 00:26:47.368 }, 00:26:47.368 "method": "bdev_nvme_attach_controller" 00:26:47.368 } 00:26:47.368 EOF 00:26:47.368 )") 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
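For reference, the subsystem setup traced above for the digest test (a dif-type 3 null bdev exported over NVMe/TCP on 10.0.0.2:4420) corresponds roughly to the following rpc.py sequence. This is a sketch, not a replay of the rpc_cmd wrapper, and it assumes scripts/rpc.py talks to the default local SPDK RPC socket.

# Equivalent of the rpc_cmd calls in the create_subsystems 0 trace (NULL_DIF=3).
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420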
00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:47.368 "params": { 00:26:47.368 "name": "Nvme0", 00:26:47.368 "trtype": "tcp", 00:26:47.368 "traddr": "10.0.0.2", 00:26:47.368 "adrfam": "ipv4", 00:26:47.368 "trsvcid": "4420", 00:26:47.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:47.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:47.368 "hdgst": true, 00:26:47.368 "ddgst": true 00:26:47.368 }, 00:26:47.368 "method": "bdev_nvme_attach_controller" 00:26:47.368 }' 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:47.368 19:21:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:47.626 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:47.626 ... 
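The three-thread digest workload started below is driven by a job file generated on the fly by gen_fio_conf. A minimal hand-written sketch matching the filename0 job description above (randread, 128 KiB blocks, iodepth 3, 3 jobs, roughly 10 s runtime, spdk_bdev ioengine) could look like this; the bdev name Nvme0n1 and the /tmp file paths are assumptions, and the header/data digests are enabled through the hdgst/ddgst attach parameters shown above rather than in the fio file.

# Illustrative only: the real job file is produced by gen_fio_conf in target/dif.sh.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF
# Run against the bdev JSON config, mirroring the fio_bdev invocation in the trace:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_fio.json /tmp/dif_digest.fio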
00:26:47.626 fio-3.35 00:26:47.626 Starting 3 threads 00:26:47.626 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.854 00:26:59.854 filename0: (groupid=0, jobs=1): err= 0: pid=3428548: Mon Jul 15 19:21:38 2024 00:26:59.854 read: IOPS=177, BW=22.2MiB/s (23.3MB/s)(223MiB/10022msec) 00:26:59.854 slat (nsec): min=4299, max=48192, avg=19988.46, stdev=4291.48 00:26:59.854 clat (usec): min=8714, max=97859, avg=16844.74, stdev=8221.97 00:26:59.854 lat (usec): min=8739, max=97879, avg=16864.73, stdev=8222.22 00:26:59.854 clat percentiles (usec): 00:26:59.854 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10945], 20.00th=[14091], 00:26:59.854 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15926], 60.00th=[16319], 00:26:59.854 | 70.00th=[16712], 80.00th=[17433], 90.00th=[18220], 95.00th=[19268], 00:26:59.854 | 99.00th=[57934], 99.50th=[58459], 99.90th=[61080], 99.95th=[98042], 00:26:59.854 | 99.99th=[98042] 00:26:59.854 bw ( KiB/s): min=18944, max=26880, per=30.16%, avg=22773.45, stdev=2081.95, samples=20 00:26:59.854 iops : min= 148, max= 210, avg=177.90, stdev=16.27, samples=20 00:26:59.854 lat (msec) : 10=2.19%, 20=93.94%, 50=0.22%, 100=3.65% 00:26:59.854 cpu : usr=92.77%, sys=6.11%, ctx=22, majf=0, minf=86 00:26:59.854 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.854 issued rwts: total=1782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:59.854 filename0: (groupid=0, jobs=1): err= 0: pid=3428549: Mon Jul 15 19:21:38 2024 00:26:59.854 read: IOPS=246, BW=30.9MiB/s (32.4MB/s)(310MiB/10043msec) 00:26:59.854 slat (nsec): min=4535, max=53875, avg=16658.66, stdev=3940.59 00:26:59.854 clat (usec): min=5990, max=47160, avg=12110.95, stdev=2119.23 00:26:59.854 lat (usec): min=6004, max=47174, avg=12127.61, stdev=2119.50 00:26:59.854 clat percentiles (usec): 00:26:59.854 | 1.00th=[ 6587], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10290], 00:26:59.854 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:26:59.854 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:26:59.854 | 99.00th=[15401], 99.50th=[15533], 99.90th=[18482], 99.95th=[43779], 00:26:59.854 | 99.99th=[46924] 00:26:59.854 bw ( KiB/s): min=27904, max=34560, per=42.01%, avg=31718.40, stdev=1652.61, samples=20 00:26:59.854 iops : min= 218, max= 270, avg=247.80, stdev=12.91, samples=20 00:26:59.854 lat (msec) : 10=16.61%, 20=83.31%, 50=0.08% 00:26:59.854 cpu : usr=94.29%, sys=5.09%, ctx=37, majf=0, minf=176 00:26:59.854 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.854 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:59.854 filename0: (groupid=0, jobs=1): err= 0: pid=3428550: Mon Jul 15 19:21:38 2024 00:26:59.854 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(208MiB/10025msec) 00:26:59.854 slat (usec): min=4, max=176, avg=17.91, stdev= 5.76 00:26:59.854 clat (usec): min=10022, max=97853, avg=18071.55, stdev=9884.46 00:26:59.854 lat (usec): min=10036, max=97871, avg=18089.46, stdev=9884.48 00:26:59.854 clat percentiles (usec): 00:26:59.854 | 
1.00th=[11207], 5.00th=[12649], 10.00th=[13566], 20.00th=[14615], 00:26:59.854 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15926], 60.00th=[16319], 00:26:59.854 | 70.00th=[16712], 80.00th=[17171], 90.00th=[18220], 95.00th=[54789], 00:26:59.854 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60556], 99.95th=[98042], 00:26:59.854 | 99.99th=[98042] 00:26:59.854 bw ( KiB/s): min=16896, max=25856, per=28.12%, avg=21235.20, stdev=2486.84, samples=20 00:26:59.854 iops : min= 132, max= 202, avg=165.90, stdev=19.43, samples=20 00:26:59.854 lat (msec) : 20=93.86%, 50=0.48%, 100=5.66% 00:26:59.854 cpu : usr=94.20%, sys=5.21%, ctx=28, majf=0, minf=195 00:26:59.854 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.854 issued rwts: total=1662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.854 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:59.854 00:26:59.855 Run status group 0 (all jobs): 00:26:59.855 READ: bw=73.7MiB/s (77.3MB/s), 20.7MiB/s-30.9MiB/s (21.7MB/s-32.4MB/s), io=741MiB (776MB), run=10022-10043msec 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.855 00:26:59.855 real 0m11.205s 00:26:59.855 user 0m29.323s 00:26:59.855 sys 0m1.907s 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.855 19:21:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:59.855 ************************************ 00:26:59.855 END TEST fio_dif_digest 00:26:59.855 ************************************ 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:59.855 19:21:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:59.855 19:21:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:26:59.855 rmmod nvme_tcp 00:26:59.855 rmmod nvme_fabrics 00:26:59.855 rmmod nvme_keyring 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3421742 ']' 00:26:59.855 19:21:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3421742 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3421742 ']' 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3421742 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3421742 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3421742' 00:26:59.855 killing process with pid 3421742 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3421742 00:26:59.855 19:21:38 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3421742 00:26:59.855 19:21:39 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:59.855 19:21:39 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:59.855 Waiting for block devices as requested 00:26:59.855 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:00.112 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:00.112 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:00.370 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:00.370 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:00.370 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:00.370 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:00.628 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:00.628 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:00.628 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:00.628 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:00.886 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:00.886 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:00.886 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:00.886 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:01.145 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:01.145 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:01.145 19:21:41 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.145 19:21:41 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.145 19:21:41 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.145 19:21:41 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.145 19:21:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.145 19:21:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:01.145 19:21:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.689 19:21:43 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.689 00:27:03.689 real 1m7.496s 00:27:03.689 user 6m24.287s 00:27:03.689 sys 0m21.031s 00:27:03.689 19:21:43 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:27:03.689 19:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:03.689 ************************************ 00:27:03.689 END TEST nvmf_dif 00:27:03.689 ************************************ 00:27:03.689 19:21:43 -- common/autotest_common.sh@1142 -- # return 0 00:27:03.689 19:21:43 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:03.689 19:21:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:03.689 19:21:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.689 19:21:43 -- common/autotest_common.sh@10 -- # set +x 00:27:03.689 ************************************ 00:27:03.689 START TEST nvmf_abort_qd_sizes 00:27:03.689 ************************************ 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:03.689 * Looking for test storage... 00:27:03.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.689 19:21:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:03.689 19:21:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:05.589 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:05.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:05.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:05.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:05.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:27:05.590 00:27:05.590 --- 10.0.0.2 ping statistics --- 00:27:05.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.590 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:27:05.590 00:27:05.590 --- 10.0.0.1 ping statistics --- 00:27:05.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.590 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:05.590 19:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:06.522 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:06.522 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:06.522 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:06.522 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:06.522 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:06.522 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:06.522 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:06.522 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:06.522 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:07.455 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3433336 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3433336 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3433336 ']' 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:07.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.712 19:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:07.712 [2024-07-15 19:21:48.008515] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:27:07.712 [2024-07-15 19:21:48.008604] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.712 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.712 [2024-07-15 19:21:48.072844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.024 [2024-07-15 19:21:48.185857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.024 [2024-07-15 19:21:48.185925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.024 [2024-07-15 19:21:48.185949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.024 [2024-07-15 19:21:48.185962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.024 [2024-07-15 19:21:48.185972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.024 [2024-07-15 19:21:48.186030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.024 [2024-07-15 19:21:48.186090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.024 [2024-07-15 19:21:48.186155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.024 [2024-07-15 19:21:48.186158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:27:08.024 19:21:48 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:27:08.024 19:21:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:08.025 19:21:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:08.025 19:21:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.025 19:21:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:08.025 ************************************ 00:27:08.025 START TEST spdk_target_abort 00:27:08.025 ************************************ 00:27:08.025 19:21:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:08.025 19:21:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:08.025 19:21:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:27:08.025 19:21:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.025 19:21:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.298 spdk_targetn1 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.298 [2024-07-15 19:21:51.218000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:11.298 [2024-07-15 19:21:51.250248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:11.298 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.299 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:11.299 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.299 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.299 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:11.299 19:21:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.299 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:14.577 Initializing NVMe Controllers 00:27:14.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:14.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:14.577 Initialization complete. Launching workers. 00:27:14.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10419, failed: 0 00:27:14.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1272, failed to submit 9147 00:27:14.577 success 833, unsuccess 439, failed 0 00:27:14.577 19:21:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:14.577 19:21:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:14.577 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.896 Initializing NVMe Controllers 00:27:17.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:17.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:17.896 Initialization complete. Launching workers. 00:27:17.896 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8689, failed: 0 00:27:17.896 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7441 00:27:17.896 success 279, unsuccess 969, failed 0 00:27:17.896 19:21:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:17.896 19:21:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:17.896 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.183 Initializing NVMe Controllers 00:27:21.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:21.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:21.183 Initialization complete. Launching workers. 
00:27:21.183 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29792, failed: 0 00:27:21.183 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2745, failed to submit 27047 00:27:21.183 success 493, unsuccess 2252, failed 0 00:27:21.183 19:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:21.183 19:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.183 19:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:21.183 19:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.183 19:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:21.183 19:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.183 19:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3433336 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3433336 ']' 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3433336 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3433336 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3433336' 00:27:22.119 killing process with pid 3433336 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3433336 00:27:22.119 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3433336 00:27:22.380 00:27:22.380 real 0m14.314s 00:27:22.380 user 0m53.899s 00:27:22.380 sys 0m2.765s 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:22.380 ************************************ 00:27:22.380 END TEST spdk_target_abort 00:27:22.380 ************************************ 00:27:22.380 19:22:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:22.380 19:22:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:22.380 19:22:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:22.380 19:22:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.380 19:22:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:22.380 
************************************ 00:27:22.380 START TEST kernel_target_abort 00:27:22.380 ************************************ 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:22.380 19:22:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:23.756 Waiting for block devices as requested 00:27:23.756 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:23.756 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:23.756 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:24.014 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.014 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:24.014 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:24.014 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:24.014 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:24.273 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:24.273 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:24.273 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:24.533 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.533 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:24.533 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:24.533 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:24.791 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:24.791 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:24.791 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:25.049 No valid GPT data, bailing 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.049 19:22:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:25.049 00:27:25.049 Discovery Log Number of Records 2, Generation counter 2 00:27:25.049 =====Discovery Log Entry 0====== 00:27:25.049 trtype: tcp 00:27:25.049 adrfam: ipv4 00:27:25.049 subtype: current discovery subsystem 00:27:25.049 treq: not specified, sq flow control disable supported 00:27:25.049 portid: 1 00:27:25.049 trsvcid: 4420 00:27:25.049 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:25.049 traddr: 10.0.0.1 00:27:25.049 eflags: none 00:27:25.049 sectype: none 00:27:25.049 =====Discovery Log Entry 1====== 00:27:25.049 trtype: tcp 00:27:25.049 adrfam: ipv4 00:27:25.049 subtype: nvme subsystem 00:27:25.049 treq: not specified, sq flow control disable supported 00:27:25.049 portid: 1 00:27:25.049 trsvcid: 4420 00:27:25.049 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:25.049 traddr: 10.0.0.1 00:27:25.049 eflags: none 00:27:25.049 sectype: none 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:25.049 19:22:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:25.049 19:22:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:25.049 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.332 Initializing NVMe Controllers 00:27:28.332 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:28.332 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:28.332 Initialization complete. Launching workers. 00:27:28.332 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29182, failed: 0 00:27:28.332 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29182, failed to submit 0 00:27:28.332 success 0, unsuccess 29182, failed 0 00:27:28.332 19:22:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:28.332 19:22:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:28.332 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.619 Initializing NVMe Controllers 00:27:31.619 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:31.619 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:31.619 Initialization complete. Launching workers. 
00:27:31.619 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62327, failed: 0 00:27:31.619 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15730, failed to submit 46597 00:27:31.619 success 0, unsuccess 15730, failed 0 00:27:31.619 19:22:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:31.619 19:22:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:31.619 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.904 Initializing NVMe Controllers 00:27:34.904 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:34.904 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:34.904 Initialization complete. Launching workers. 00:27:34.904 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58077, failed: 0 00:27:34.904 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14498, failed to submit 43579 00:27:34.904 success 0, unsuccess 14498, failed 0 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:34.904 19:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:35.533 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:35.533 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:35.533 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:35.533 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:35.533 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:35.533 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:35.533 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:35.533 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:35.533 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:35.533 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:35.533 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:35.533 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:35.533 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:35.533 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:35.533 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:35.533 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:36.530 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:36.530 00:27:36.530 real 0m14.218s 00:27:36.530 user 0m4.756s 00:27:36.530 sys 0m3.412s 00:27:36.788 19:22:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:36.788 19:22:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:36.788 ************************************ 00:27:36.788 END TEST kernel_target_abort 00:27:36.788 ************************************ 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.788 19:22:16 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.788 rmmod nvme_tcp 00:27:36.788 rmmod nvme_fabrics 00:27:36.788 rmmod nvme_keyring 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3433336 ']' 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3433336 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3433336 ']' 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3433336 00:27:36.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3433336) - No such process 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3433336 is not found' 00:27:36.788 Process with pid 3433336 is not found 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:36.788 19:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:37.723 Waiting for block devices as requested 00:27:37.723 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:37.982 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:37.982 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:37.982 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:38.241 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:38.241 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:38.241 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:38.241 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:38.501 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:38.501 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:38.501 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:38.501 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:38.761 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:38.761 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:38.761 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:38.761 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:39.021 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:39.021 19:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:39.021 19:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:39.021 19:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:39.021 19:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:39.021 19:22:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.021 19:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:39.021 19:22:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.560 19:22:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:41.560 00:27:41.560 real 0m37.755s 00:27:41.560 user 1m0.643s 00:27:41.560 sys 0m9.377s 00:27:41.560 19:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.560 19:22:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:41.560 ************************************ 00:27:41.560 END TEST nvmf_abort_qd_sizes 00:27:41.560 ************************************ 00:27:41.560 19:22:21 -- common/autotest_common.sh@1142 -- # return 0 00:27:41.560 19:22:21 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:41.560 19:22:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:41.560 19:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.560 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:27:41.560 ************************************ 00:27:41.560 START TEST keyring_file 00:27:41.560 ************************************ 00:27:41.560 19:22:21 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:41.560 * Looking for test storage... 
00:27:41.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:41.560 19:22:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:41.560 19:22:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.560 19:22:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.560 19:22:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.560 19:22:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.560 19:22:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.560 19:22:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.560 19:22:21 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.560 19:22:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:41.560 19:22:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.560 19:22:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vRzuzyPCaV 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:41.561 19:22:21 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vRzuzyPCaV 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vRzuzyPCaV 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vRzuzyPCaV 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DaVyOCioad 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:41.561 19:22:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DaVyOCioad 00:27:41.561 19:22:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DaVyOCioad 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DaVyOCioad 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=3439099 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:41.561 19:22:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3439099 00:27:41.561 19:22:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3439099 ']' 00:27:41.561 19:22:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.561 19:22:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.561 19:22:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.561 19:22:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.561 19:22:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:41.561 [2024-07-15 19:22:21.648743] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
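The prep_key traces above create one temp file per key, fill it with the secret wrapped in the NVMe TLS PSK interchange format, and lock the permissions down to 0600 before the target and the bdevperf initiator are started on separate cores with their own RPC sockets. A condensed sketch of those steps follows; SPDK_DIR stands in for the full Jenkins workspace path, and the interchange-format wrapping itself is left to the helper rather than reproduced here.

# condensed sketch of the key-file preparation and process launch traced above
SPDK_DIR=/path/to/spdk                          # stand-in for the workspace checkout
key0path=$(mktemp) && chmod 0600 "$key0path"    # /tmp/tmp.vRzuzyPCaV in this run
key1path=$(mktemp) && chmod 0600 "$key1path"    # /tmp/tmp.DaVyOCioad in this run
# format_interchange_psk (nvmf/common.sh) converts the raw hex secrets
# 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 into
# "NVMeTLSkey-1:..." interchange strings via an inline python helper; those
# strings are what end up in the two key files.
"$SPDK_DIR"/build/bin/spdk_tgt &                # target side, RPC on /var/tmp/spdk.sock
"$SPDK_DIR"/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z &             # initiator side, RPC on /var/tmp/bperf.sock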
00:27:41.561 [2024-07-15 19:22:21.648823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439099 ] 00:27:41.561 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.561 [2024-07-15 19:22:21.708954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.561 [2024-07-15 19:22:21.824257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:42.500 19:22:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:42.500 [2024-07-15 19:22:22.590626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.500 null0 00:27:42.500 [2024-07-15 19:22:22.622678] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:42.500 [2024-07-15 19:22:22.623123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:42.500 [2024-07-15 19:22:22.630691] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.500 19:22:22 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:42.500 [2024-07-15 19:22:22.642716] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:42.500 request: 00:27:42.500 { 00:27:42.500 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.500 "secure_channel": false, 00:27:42.500 "listen_address": { 00:27:42.500 "trtype": "tcp", 00:27:42.500 "traddr": "127.0.0.1", 00:27:42.500 "trsvcid": "4420" 00:27:42.500 }, 00:27:42.500 "method": "nvmf_subsystem_add_listener", 00:27:42.500 "req_id": 1 00:27:42.500 } 00:27:42.500 Got JSON-RPC error response 00:27:42.500 response: 00:27:42.500 { 00:27:42.500 "code": -32602, 00:27:42.500 "message": "Invalid parameters" 00:27:42.500 } 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@651 -- # es=1 
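From here the test drives the bdevperf process entirely over its RPC socket: keys are registered with keyring_file_add_key, their path/refcnt fields are checked with keyring_get_keys piped through jq, and a TLS-protected controller is attached and exercised before being torn down again. A sketch of that happy path, reusing the key files prepared above and assuming the commands run from the root of the SPDK checkout:

# happy-path keyring flow driven over the bperf RPC socket
rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_file_add_key key0 "$key0path"
$rpc keyring_file_add_key key1 "$key1path"
$rpc keyring_get_keys | jq '.[] | select(.name == "key0")'      # inspect path, refcnt, removed
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests    # 1s randrw run
$rpc bdev_nvme_detach_controller nvme0
$rpc keyring_file_remove_key key0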
00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:42.500 19:22:22 keyring_file -- keyring/file.sh@46 -- # bperfpid=3439240 00:27:42.500 19:22:22 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3439240 /var/tmp/bperf.sock 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3439240 ']' 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:42.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:42.500 19:22:22 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.500 19:22:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:42.500 [2024-07-15 19:22:22.692057] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 00:27:42.500 [2024-07-15 19:22:22.692129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439240 ] 00:27:42.500 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.500 [2024-07-15 19:22:22.750290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.500 [2024-07-15 19:22:22.861361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.758 19:22:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.758 19:22:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:42.758 19:22:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:42.759 19:22:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:43.020 19:22:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DaVyOCioad 00:27:43.020 19:22:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DaVyOCioad 00:27:43.283 19:22:23 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:43.283 19:22:23 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:43.283 19:22:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.283 19:22:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.283 19:22:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:43.541 19:22:23 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.vRzuzyPCaV == \/\t\m\p\/\t\m\p\.\v\R\z\u\z\y\P\C\a\V ]] 00:27:43.541 19:22:23 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:43.541 19:22:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:43.541 19:22:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DaVyOCioad == \/\t\m\p\/\t\m\p\.\D\a\V\y\O\C\i\o\a\d ]] 00:27:43.541 19:22:23 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:43.541 19:22:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.799 19:22:24 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:43.799 19:22:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:43.799 19:22:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:43.799 19:22:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:43.799 19:22:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.799 19:22:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.799 19:22:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:44.057 19:22:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:44.057 19:22:24 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:44.057 19:22:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:44.316 [2024-07-15 19:22:24.671683] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:44.316 nvme0n1 00:27:44.575 19:22:24 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:44.575 19:22:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:44.575 19:22:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:44.575 19:22:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:44.575 19:22:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.575 19:22:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:44.833 19:22:25 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:44.833 19:22:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:44.833 19:22:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:44.833 19:22:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:44.833 19:22:25 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:44.833 19:22:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.833 19:22:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:44.833 19:22:25 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:44.834 19:22:25 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.092 Running I/O for 1 seconds... 00:27:46.041 00:27:46.041 Latency(us) 00:27:46.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.041 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:46.041 nvme0n1 : 1.03 4305.56 16.82 0.00 0.00 29357.62 5437.06 35340.89 00:27:46.041 =================================================================================================================== 00:27:46.041 Total : 4305.56 16.82 0.00 0.00 29357.62 5437.06 35340.89 00:27:46.041 0 00:27:46.041 19:22:26 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:46.041 19:22:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:46.299 19:22:26 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:46.299 19:22:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:46.299 19:22:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:46.299 19:22:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.299 19:22:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.299 19:22:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:46.556 19:22:26 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:46.556 19:22:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:46.556 19:22:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:46.556 19:22:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:46.556 19:22:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:46.556 19:22:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.556 19:22:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:46.814 19:22:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:46.814 19:22:27 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:46.814 19:22:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:46.814 19:22:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:46.814 19:22:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:46.814 19:22:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.814 19:22:27 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:46.814 19:22:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.814 19:22:27 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:46.814 19:22:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:47.071 [2024-07-15 19:22:27.405641] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:47.071 [2024-07-15 19:22:27.405672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e9a0 (107): Transport endpoint is not connected 00:27:47.071 [2024-07-15 19:22:27.406669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e9a0 (9): Bad file descriptor 00:27:47.071 [2024-07-15 19:22:27.407666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:47.071 [2024-07-15 19:22:27.407690] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:47.071 [2024-07-15 19:22:27.407706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:47.071 request: 00:27:47.071 { 00:27:47.071 "name": "nvme0", 00:27:47.071 "trtype": "tcp", 00:27:47.071 "traddr": "127.0.0.1", 00:27:47.071 "adrfam": "ipv4", 00:27:47.071 "trsvcid": "4420", 00:27:47.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.071 "prchk_reftag": false, 00:27:47.071 "prchk_guard": false, 00:27:47.071 "hdgst": false, 00:27:47.071 "ddgst": false, 00:27:47.071 "psk": "key1", 00:27:47.071 "method": "bdev_nvme_attach_controller", 00:27:47.071 "req_id": 1 00:27:47.071 } 00:27:47.071 Got JSON-RPC error response 00:27:47.071 response: 00:27:47.071 { 00:27:47.071 "code": -5, 00:27:47.071 "message": "Input/output error" 00:27:47.071 } 00:27:47.071 19:22:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:47.071 19:22:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:47.071 19:22:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:47.071 19:22:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:47.071 19:22:27 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:47.071 19:22:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:47.071 19:22:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:47.071 19:22:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.071 19:22:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.071 19:22:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:47.329 19:22:27 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:47.329 19:22:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:47.329 19:22:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:47.329 19:22:27 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:47.329 19:22:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.329 19:22:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:47.329 19:22:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.586 19:22:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:47.586 19:22:27 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:47.586 19:22:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:47.844 19:22:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:47.844 19:22:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:48.101 19:22:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:48.101 19:22:28 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:48.101 19:22:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:48.359 19:22:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:48.359 19:22:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.vRzuzyPCaV 00:27:48.359 19:22:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:48.359 19:22:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:48.359 19:22:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:48.359 19:22:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:48.359 19:22:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.359 19:22:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:48.359 19:22:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:48.359 19:22:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:48.359 19:22:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:48.616 [2024-07-15 19:22:28.907597] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vRzuzyPCaV': 0100660 00:27:48.616 [2024-07-15 19:22:28.907637] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:48.616 request: 00:27:48.616 { 00:27:48.616 "name": "key0", 00:27:48.616 "path": "/tmp/tmp.vRzuzyPCaV", 00:27:48.616 "method": "keyring_file_add_key", 00:27:48.616 "req_id": 1 00:27:48.616 } 00:27:48.616 Got JSON-RPC error response 00:27:48.616 response: 00:27:48.616 { 00:27:48.616 "code": -1, 00:27:48.616 "message": "Operation not permitted" 00:27:48.616 } 00:27:48.616 19:22:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:48.616 19:22:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:48.616 19:22:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:48.616 19:22:28 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:48.616 19:22:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.vRzuzyPCaV 00:27:48.616 19:22:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:48.616 19:22:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vRzuzyPCaV 00:27:48.874 19:22:29 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.vRzuzyPCaV 00:27:48.874 19:22:29 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:48.874 19:22:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:48.874 19:22:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:48.874 19:22:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:48.874 19:22:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:48.874 19:22:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:49.131 19:22:29 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:49.131 19:22:29 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:49.131 19:22:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:49.131 19:22:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:49.131 19:22:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:49.132 19:22:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.132 19:22:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:49.132 19:22:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:49.132 19:22:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:49.132 19:22:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:49.389 [2024-07-15 19:22:29.649671] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vRzuzyPCaV': No such file or directory 00:27:49.389 [2024-07-15 19:22:29.649710] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:49.389 [2024-07-15 19:22:29.649741] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:49.389 [2024-07-15 19:22:29.649754] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:49.389 [2024-07-15 19:22:29.649781] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:49.389 request: 00:27:49.389 { 00:27:49.389 "name": "nvme0", 00:27:49.389 "trtype": "tcp", 00:27:49.389 "traddr": "127.0.0.1", 00:27:49.389 "adrfam": "ipv4", 00:27:49.389 
"trsvcid": "4420", 00:27:49.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:49.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:49.389 "prchk_reftag": false, 00:27:49.389 "prchk_guard": false, 00:27:49.389 "hdgst": false, 00:27:49.389 "ddgst": false, 00:27:49.389 "psk": "key0", 00:27:49.389 "method": "bdev_nvme_attach_controller", 00:27:49.389 "req_id": 1 00:27:49.389 } 00:27:49.389 Got JSON-RPC error response 00:27:49.389 response: 00:27:49.389 { 00:27:49.389 "code": -19, 00:27:49.389 "message": "No such device" 00:27:49.389 } 00:27:49.389 19:22:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:49.389 19:22:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:49.389 19:22:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:49.389 19:22:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:49.389 19:22:29 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:49.389 19:22:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:49.657 19:22:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:49.657 19:22:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:49.657 19:22:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:49.657 19:22:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:49.657 19:22:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:49.657 19:22:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:49.657 19:22:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vsguqFQzpN 00:27:49.657 19:22:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:49.657 19:22:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:49.657 19:22:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:49.657 19:22:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:49.657 19:22:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:49.657 19:22:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:49.658 19:22:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:49.658 19:22:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vsguqFQzpN 00:27:49.658 19:22:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vsguqFQzpN 00:27:49.658 19:22:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.vsguqFQzpN 00:27:49.658 19:22:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vsguqFQzpN 00:27:49.658 19:22:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vsguqFQzpN 00:27:49.922 19:22:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:49.923 19:22:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:50.180 nvme0n1 00:27:50.180 
19:22:30 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:50.180 19:22:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:50.180 19:22:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:50.180 19:22:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:50.180 19:22:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.180 19:22:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:50.437 19:22:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:50.437 19:22:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:50.437 19:22:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:50.694 19:22:31 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:50.694 19:22:31 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:50.694 19:22:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:50.694 19:22:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.694 19:22:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:50.951 19:22:31 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:50.951 19:22:31 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:50.951 19:22:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:50.951 19:22:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:50.951 19:22:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:50.951 19:22:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.951 19:22:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:51.208 19:22:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:51.208 19:22:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:51.208 19:22:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:51.465 19:22:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:51.465 19:22:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:51.465 19:22:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:51.721 19:22:32 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:51.721 19:22:32 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vsguqFQzpN 00:27:51.721 19:22:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vsguqFQzpN 00:27:51.977 19:22:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DaVyOCioad 00:27:51.977 19:22:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DaVyOCioad 00:27:52.234 19:22:32 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:52.234 19:22:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:52.491 nvme0n1 00:27:52.491 19:22:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:52.491 19:22:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:52.748 19:22:33 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:52.748 "subsystems": [ 00:27:52.748 { 00:27:52.748 "subsystem": "keyring", 00:27:52.748 "config": [ 00:27:52.748 { 00:27:52.748 "method": "keyring_file_add_key", 00:27:52.748 "params": { 00:27:52.748 "name": "key0", 00:27:52.748 "path": "/tmp/tmp.vsguqFQzpN" 00:27:52.748 } 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "method": "keyring_file_add_key", 00:27:52.748 "params": { 00:27:52.748 "name": "key1", 00:27:52.748 "path": "/tmp/tmp.DaVyOCioad" 00:27:52.748 } 00:27:52.748 } 00:27:52.748 ] 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "subsystem": "iobuf", 00:27:52.748 "config": [ 00:27:52.748 { 00:27:52.748 "method": "iobuf_set_options", 00:27:52.748 "params": { 00:27:52.748 "small_pool_count": 8192, 00:27:52.748 "large_pool_count": 1024, 00:27:52.748 "small_bufsize": 8192, 00:27:52.748 "large_bufsize": 135168 00:27:52.748 } 00:27:52.748 } 00:27:52.748 ] 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "subsystem": "sock", 00:27:52.748 "config": [ 00:27:52.748 { 00:27:52.748 "method": "sock_set_default_impl", 00:27:52.748 "params": { 00:27:52.748 "impl_name": "posix" 00:27:52.748 } 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "method": "sock_impl_set_options", 00:27:52.748 "params": { 00:27:52.748 "impl_name": "ssl", 00:27:52.748 "recv_buf_size": 4096, 00:27:52.748 "send_buf_size": 4096, 00:27:52.748 "enable_recv_pipe": true, 00:27:52.748 "enable_quickack": false, 00:27:52.748 "enable_placement_id": 0, 00:27:52.748 "enable_zerocopy_send_server": true, 00:27:52.748 "enable_zerocopy_send_client": false, 00:27:52.748 "zerocopy_threshold": 0, 00:27:52.748 "tls_version": 0, 00:27:52.748 "enable_ktls": false 00:27:52.748 } 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "method": "sock_impl_set_options", 00:27:52.748 "params": { 00:27:52.748 "impl_name": "posix", 00:27:52.748 "recv_buf_size": 2097152, 00:27:52.748 "send_buf_size": 2097152, 00:27:52.748 "enable_recv_pipe": true, 00:27:52.748 "enable_quickack": false, 00:27:52.748 "enable_placement_id": 0, 00:27:52.748 "enable_zerocopy_send_server": true, 00:27:52.748 "enable_zerocopy_send_client": false, 00:27:52.748 "zerocopy_threshold": 0, 00:27:52.748 "tls_version": 0, 00:27:52.748 "enable_ktls": false 00:27:52.748 } 00:27:52.748 } 00:27:52.748 ] 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "subsystem": "vmd", 00:27:52.748 "config": [] 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "subsystem": "accel", 00:27:52.748 "config": [ 00:27:52.748 { 00:27:52.748 "method": "accel_set_options", 00:27:52.748 "params": { 00:27:52.748 "small_cache_size": 128, 00:27:52.748 "large_cache_size": 16, 00:27:52.748 "task_count": 2048, 00:27:52.748 "sequence_count": 2048, 00:27:52.748 "buf_count": 2048 00:27:52.748 } 00:27:52.748 } 00:27:52.748 ] 00:27:52.748 
}, 00:27:52.748 { 00:27:52.748 "subsystem": "bdev", 00:27:52.748 "config": [ 00:27:52.748 { 00:27:52.748 "method": "bdev_set_options", 00:27:52.748 "params": { 00:27:52.748 "bdev_io_pool_size": 65535, 00:27:52.748 "bdev_io_cache_size": 256, 00:27:52.748 "bdev_auto_examine": true, 00:27:52.748 "iobuf_small_cache_size": 128, 00:27:52.748 "iobuf_large_cache_size": 16 00:27:52.748 } 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "method": "bdev_raid_set_options", 00:27:52.748 "params": { 00:27:52.748 "process_window_size_kb": 1024 00:27:52.748 } 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "method": "bdev_iscsi_set_options", 00:27:52.748 "params": { 00:27:52.748 "timeout_sec": 30 00:27:52.748 } 00:27:52.748 }, 00:27:52.748 { 00:27:52.748 "method": "bdev_nvme_set_options", 00:27:52.748 "params": { 00:27:52.748 "action_on_timeout": "none", 00:27:52.748 "timeout_us": 0, 00:27:52.748 "timeout_admin_us": 0, 00:27:52.748 "keep_alive_timeout_ms": 10000, 00:27:52.748 "arbitration_burst": 0, 00:27:52.748 "low_priority_weight": 0, 00:27:52.748 "medium_priority_weight": 0, 00:27:52.748 "high_priority_weight": 0, 00:27:52.748 "nvme_adminq_poll_period_us": 10000, 00:27:52.748 "nvme_ioq_poll_period_us": 0, 00:27:52.748 "io_queue_requests": 512, 00:27:52.748 "delay_cmd_submit": true, 00:27:52.748 "transport_retry_count": 4, 00:27:52.748 "bdev_retry_count": 3, 00:27:52.748 "transport_ack_timeout": 0, 00:27:52.748 "ctrlr_loss_timeout_sec": 0, 00:27:52.748 "reconnect_delay_sec": 0, 00:27:52.748 "fast_io_fail_timeout_sec": 0, 00:27:52.748 "disable_auto_failback": false, 00:27:52.748 "generate_uuids": false, 00:27:52.748 "transport_tos": 0, 00:27:52.748 "nvme_error_stat": false, 00:27:52.748 "rdma_srq_size": 0, 00:27:52.748 "io_path_stat": false, 00:27:52.748 "allow_accel_sequence": false, 00:27:52.748 "rdma_max_cq_size": 0, 00:27:52.748 "rdma_cm_event_timeout_ms": 0, 00:27:52.748 "dhchap_digests": [ 00:27:52.748 "sha256", 00:27:52.748 "sha384", 00:27:52.748 "sha512" 00:27:52.748 ], 00:27:52.748 "dhchap_dhgroups": [ 00:27:52.748 "null", 00:27:52.748 "ffdhe2048", 00:27:52.748 "ffdhe3072", 00:27:52.748 "ffdhe4096", 00:27:52.748 "ffdhe6144", 00:27:52.749 "ffdhe8192" 00:27:52.749 ] 00:27:52.749 } 00:27:52.749 }, 00:27:52.749 { 00:27:52.749 "method": "bdev_nvme_attach_controller", 00:27:52.749 "params": { 00:27:52.749 "name": "nvme0", 00:27:52.749 "trtype": "TCP", 00:27:52.749 "adrfam": "IPv4", 00:27:52.749 "traddr": "127.0.0.1", 00:27:52.749 "trsvcid": "4420", 00:27:52.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.749 "prchk_reftag": false, 00:27:52.749 "prchk_guard": false, 00:27:52.749 "ctrlr_loss_timeout_sec": 0, 00:27:52.749 "reconnect_delay_sec": 0, 00:27:52.749 "fast_io_fail_timeout_sec": 0, 00:27:52.749 "psk": "key0", 00:27:52.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.749 "hdgst": false, 00:27:52.749 "ddgst": false 00:27:52.749 } 00:27:52.749 }, 00:27:52.749 { 00:27:52.749 "method": "bdev_nvme_set_hotplug", 00:27:52.749 "params": { 00:27:52.749 "period_us": 100000, 00:27:52.749 "enable": false 00:27:52.749 } 00:27:52.749 }, 00:27:52.749 { 00:27:52.749 "method": "bdev_wait_for_examine" 00:27:52.749 } 00:27:52.749 ] 00:27:52.749 }, 00:27:52.749 { 00:27:52.749 "subsystem": "nbd", 00:27:52.749 "config": [] 00:27:52.749 } 00:27:52.749 ] 00:27:52.749 }' 00:27:52.749 19:22:33 keyring_file -- keyring/file.sh@114 -- # killprocess 3439240 00:27:52.749 19:22:33 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3439240 ']' 00:27:52.749 19:22:33 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3439240 00:27:52.749 19:22:33 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:52.749 19:22:33 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.749 19:22:33 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439240 00:27:53.008 19:22:33 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:53.008 19:22:33 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:53.008 19:22:33 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439240' 00:27:53.008 killing process with pid 3439240 00:27:53.008 19:22:33 keyring_file -- common/autotest_common.sh@967 -- # kill 3439240 00:27:53.008 Received shutdown signal, test time was about 1.000000 seconds 00:27:53.008 00:27:53.008 Latency(us) 00:27:53.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.008 =================================================================================================================== 00:27:53.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:53.008 19:22:33 keyring_file -- common/autotest_common.sh@972 -- # wait 3439240 00:27:53.267 19:22:33 keyring_file -- keyring/file.sh@117 -- # bperfpid=3440578 00:27:53.267 19:22:33 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3440578 /var/tmp/bperf.sock 00:27:53.267 19:22:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3440578 ']' 00:27:53.267 19:22:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:53.267 19:22:33 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:53.267 19:22:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.267 19:22:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:53.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
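The bperf process launched above is SPDK's bdevperf example application: it receives its JSON configuration over /dev/fd/63 (process substitution) and is then driven entirely through the RPC socket passed with -r. A minimal sketch of that launch pattern, assuming SPDK_DIR points at the checkout used in this run and with the configuration trimmed to a placeholder:

# Sketch only: start bdevperf with an inline JSON config and drive it over its RPC socket.
SPDK_DIR=/path/to/spdk                        # placeholder for the repository root
config='{ "subsystems": [] }'                 # the real run passes the full dump echoed below
"$SPDK_DIR/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
# Once the socket is listening, ordinary SPDK RPCs reach this bdevperf instance:
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_get_keys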
00:27:53.267 19:22:33 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:53.267 "subsystems": [ 00:27:53.267 { 00:27:53.267 "subsystem": "keyring", 00:27:53.267 "config": [ 00:27:53.267 { 00:27:53.267 "method": "keyring_file_add_key", 00:27:53.267 "params": { 00:27:53.267 "name": "key0", 00:27:53.267 "path": "/tmp/tmp.vsguqFQzpN" 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "keyring_file_add_key", 00:27:53.267 "params": { 00:27:53.267 "name": "key1", 00:27:53.267 "path": "/tmp/tmp.DaVyOCioad" 00:27:53.267 } 00:27:53.267 } 00:27:53.267 ] 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "subsystem": "iobuf", 00:27:53.267 "config": [ 00:27:53.267 { 00:27:53.267 "method": "iobuf_set_options", 00:27:53.267 "params": { 00:27:53.267 "small_pool_count": 8192, 00:27:53.267 "large_pool_count": 1024, 00:27:53.267 "small_bufsize": 8192, 00:27:53.267 "large_bufsize": 135168 00:27:53.267 } 00:27:53.267 } 00:27:53.267 ] 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "subsystem": "sock", 00:27:53.267 "config": [ 00:27:53.267 { 00:27:53.267 "method": "sock_set_default_impl", 00:27:53.267 "params": { 00:27:53.267 "impl_name": "posix" 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "sock_impl_set_options", 00:27:53.267 "params": { 00:27:53.267 "impl_name": "ssl", 00:27:53.267 "recv_buf_size": 4096, 00:27:53.267 "send_buf_size": 4096, 00:27:53.267 "enable_recv_pipe": true, 00:27:53.267 "enable_quickack": false, 00:27:53.267 "enable_placement_id": 0, 00:27:53.267 "enable_zerocopy_send_server": true, 00:27:53.267 "enable_zerocopy_send_client": false, 00:27:53.267 "zerocopy_threshold": 0, 00:27:53.267 "tls_version": 0, 00:27:53.267 "enable_ktls": false 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "sock_impl_set_options", 00:27:53.267 "params": { 00:27:53.267 "impl_name": "posix", 00:27:53.267 "recv_buf_size": 2097152, 00:27:53.267 "send_buf_size": 2097152, 00:27:53.267 "enable_recv_pipe": true, 00:27:53.267 "enable_quickack": false, 00:27:53.267 "enable_placement_id": 0, 00:27:53.267 "enable_zerocopy_send_server": true, 00:27:53.267 "enable_zerocopy_send_client": false, 00:27:53.267 "zerocopy_threshold": 0, 00:27:53.267 "tls_version": 0, 00:27:53.267 "enable_ktls": false 00:27:53.267 } 00:27:53.267 } 00:27:53.267 ] 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "subsystem": "vmd", 00:27:53.267 "config": [] 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "subsystem": "accel", 00:27:53.267 "config": [ 00:27:53.267 { 00:27:53.267 "method": "accel_set_options", 00:27:53.267 "params": { 00:27:53.267 "small_cache_size": 128, 00:27:53.267 "large_cache_size": 16, 00:27:53.267 "task_count": 2048, 00:27:53.267 "sequence_count": 2048, 00:27:53.267 "buf_count": 2048 00:27:53.267 } 00:27:53.267 } 00:27:53.267 ] 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "subsystem": "bdev", 00:27:53.267 "config": [ 00:27:53.267 { 00:27:53.267 "method": "bdev_set_options", 00:27:53.267 "params": { 00:27:53.267 "bdev_io_pool_size": 65535, 00:27:53.267 "bdev_io_cache_size": 256, 00:27:53.267 "bdev_auto_examine": true, 00:27:53.267 "iobuf_small_cache_size": 128, 00:27:53.267 "iobuf_large_cache_size": 16 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "bdev_raid_set_options", 00:27:53.267 "params": { 00:27:53.267 "process_window_size_kb": 1024 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "bdev_iscsi_set_options", 00:27:53.267 "params": { 00:27:53.267 "timeout_sec": 30 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": 
"bdev_nvme_set_options", 00:27:53.267 "params": { 00:27:53.267 "action_on_timeout": "none", 00:27:53.267 "timeout_us": 0, 00:27:53.267 "timeout_admin_us": 0, 00:27:53.267 "keep_alive_timeout_ms": 10000, 00:27:53.267 "arbitration_burst": 0, 00:27:53.267 "low_priority_weight": 0, 00:27:53.267 "medium_priority_weight": 0, 00:27:53.267 "high_priority_weight": 0, 00:27:53.267 "nvme_adminq_poll_period_us": 10000, 00:27:53.267 "nvme_ioq_poll_period_us": 0, 00:27:53.267 "io_queue_requests": 512, 00:27:53.267 "delay_cmd_submit": true, 00:27:53.267 "transport_retry_count": 4, 00:27:53.267 "bdev_retry_count": 3, 00:27:53.267 "transport_ack_timeout": 0, 00:27:53.267 "ctrlr_loss_timeout_sec": 0, 00:27:53.267 "reconnect_delay_sec": 0, 00:27:53.267 "fast_io_fail_timeout_sec": 0, 00:27:53.267 "disable_auto_failback": false, 00:27:53.267 "generate_uuids": false, 00:27:53.267 "transport_tos": 0, 00:27:53.267 "nvme_error_stat": false, 00:27:53.267 "rdma_srq_size": 0, 00:27:53.267 "io_path_stat": false, 00:27:53.267 "allow_accel_sequence": false, 00:27:53.267 "rdma_max_cq_size": 0, 00:27:53.267 "rdma_cm_event_timeout_ms": 0, 00:27:53.267 "dhchap_digests": [ 00:27:53.267 "sha256", 00:27:53.267 "sha384", 00:27:53.267 "sha512" 00:27:53.267 ], 00:27:53.267 "dhchap_dhgroups": [ 00:27:53.267 "null", 00:27:53.267 "ffdhe2048", 00:27:53.267 "ffdhe3072", 00:27:53.267 "ffdhe4096", 00:27:53.267 "ffdhe6144", 00:27:53.267 "ffdhe8192" 00:27:53.267 ] 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "bdev_nvme_attach_controller", 00:27:53.267 "params": { 00:27:53.267 "name": "nvme0", 00:27:53.267 "trtype": "TCP", 00:27:53.267 "adrfam": "IPv4", 00:27:53.267 "traddr": "127.0.0.1", 00:27:53.267 "trsvcid": "4420", 00:27:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.267 "prchk_reftag": false, 00:27:53.267 "prchk_guard": false, 00:27:53.267 "ctrlr_loss_timeout_sec": 0, 00:27:53.267 "reconnect_delay_sec": 0, 00:27:53.267 "fast_io_fail_timeout_sec": 0, 00:27:53.267 "psk": "key0", 00:27:53.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.267 "hdgst": false, 00:27:53.267 "ddgst": false 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "bdev_nvme_set_hotplug", 00:27:53.267 "params": { 00:27:53.267 "period_us": 100000, 00:27:53.267 "enable": false 00:27:53.267 } 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "method": "bdev_wait_for_examine" 00:27:53.267 } 00:27:53.267 ] 00:27:53.267 }, 00:27:53.267 { 00:27:53.267 "subsystem": "nbd", 00:27:53.267 "config": [] 00:27:53.267 } 00:27:53.267 ] 00:27:53.267 }' 00:27:53.267 19:22:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.267 19:22:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:53.267 [2024-07-15 19:22:33.491949] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
00:27:53.267 [2024-07-15 19:22:33.492029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440578 ] 00:27:53.267 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.267 [2024-07-15 19:22:33.552436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.267 [2024-07-15 19:22:33.665463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.525 [2024-07-15 19:22:33.858644] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:54.090 19:22:34 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.090 19:22:34 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:54.090 19:22:34 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:54.090 19:22:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:54.090 19:22:34 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:54.349 19:22:34 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:54.349 19:22:34 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:54.349 19:22:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:54.349 19:22:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:54.349 19:22:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:54.349 19:22:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:54.349 19:22:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:54.606 19:22:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:54.606 19:22:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:54.606 19:22:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:54.606 19:22:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:54.606 19:22:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:54.606 19:22:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:54.606 19:22:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:54.864 19:22:35 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:54.864 19:22:35 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:54.864 19:22:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:54.864 19:22:35 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:55.124 19:22:35 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:55.124 19:22:35 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:55.124 19:22:35 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.vsguqFQzpN /tmp/tmp.DaVyOCioad 00:27:55.124 19:22:35 keyring_file -- keyring/file.sh@20 -- # killprocess 3440578 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3440578 ']' 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3440578 00:27:55.124 19:22:35 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3440578 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3440578' 00:27:55.124 killing process with pid 3440578 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@967 -- # kill 3440578 00:27:55.124 Received shutdown signal, test time was about 1.000000 seconds 00:27:55.124 00:27:55.124 Latency(us) 00:27:55.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.124 =================================================================================================================== 00:27:55.124 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:55.124 19:22:35 keyring_file -- common/autotest_common.sh@972 -- # wait 3440578 00:27:55.384 19:22:35 keyring_file -- keyring/file.sh@21 -- # killprocess 3439099 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3439099 ']' 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3439099 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439099 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439099' 00:27:55.384 killing process with pid 3439099 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@967 -- # kill 3439099 00:27:55.384 [2024-07-15 19:22:35.773383] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:55.384 19:22:35 keyring_file -- common/autotest_common.sh@972 -- # wait 3439099 00:27:55.953 00:27:55.953 real 0m14.799s 00:27:55.953 user 0m35.688s 00:27:55.953 sys 0m3.359s 00:27:55.953 19:22:36 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.953 19:22:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:55.953 ************************************ 00:27:55.953 END TEST keyring_file 00:27:55.953 ************************************ 00:27:55.953 19:22:36 -- common/autotest_common.sh@1142 -- # return 0 00:27:55.953 19:22:36 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:55.953 19:22:36 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:55.953 19:22:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:55.953 19:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.953 19:22:36 -- common/autotest_common.sh@10 -- # set +x 00:27:55.953 ************************************ 00:27:55.953 START TEST keyring_linux 00:27:55.953 ************************************ 00:27:55.953 19:22:36 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:55.953 * Looking for test storage... 00:27:55.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:55.953 19:22:36 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:55.953 19:22:36 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.953 19:22:36 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.954 19:22:36 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.954 19:22:36 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.954 19:22:36 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.954 19:22:36 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.954 19:22:36 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.954 19:22:36 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.954 19:22:36 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:55.954 19:22:36 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.954 19:22:36 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:55.954 19:22:36 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:55.954 19:22:36 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:55.954 19:22:36 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:55.954 19:22:36 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:55.954 19:22:36 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:55.954 19:22:36 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:55.954 19:22:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:55.954 19:22:36 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:55.954 19:22:36 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:55.954 19:22:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:55.954 19:22:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:55.954 19:22:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:55.954 19:22:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:56.212 19:22:36 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:56.212 /tmp/:spdk-test:key0 00:27:56.212 19:22:36 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:56.212 19:22:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:56.212 19:22:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:56.212 19:22:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:56.212 19:22:36 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:56.212 19:22:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:56.212 19:22:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:56.212 19:22:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:56.212 /tmp/:spdk-test:key1 00:27:56.212 19:22:36 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3441063 00:27:56.212 19:22:36 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:56.212 19:22:36 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3441063 00:27:56.212 19:22:36 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3441063 ']' 00:27:56.212 19:22:36 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.212 19:22:36 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.212 19:22:36 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.212 19:22:36 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.212 19:22:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:56.212 [2024-07-15 19:22:36.499436] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
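prep_key above wraps each raw test key in the NVMe TLS PSK interchange form (the NVMeTLSkey-1 prefix, a two-digit hash field, base64 of the key bytes followed by their CRC-32, and a trailing colon) and stores it in a mode-0600 file. A rough bash equivalent of the inline python used for key0, offered as a sketch with the CRC byte order assumed little-endian rather than taken from the helper:

key=00112233445566778899aabbccddeeff          # key0 of this run; hash field 00 = no digest
psk=$(python3 - "$key" <<'PY'
import sys, base64, struct, zlib
key = sys.argv[1].encode()
print("NVMeTLSkey-1:00:" + base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode() + ":")
PY
)
echo -n "$psk" > /tmp/:spdk-test:key0 && chmod 0600 /tmp/:spdk-test:key0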
00:27:56.212 [2024-07-15 19:22:36.499521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441063 ] 00:27:56.212 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.212 [2024-07-15 19:22:36.557544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.470 [2024-07-15 19:22:36.667852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:56.729 19:22:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:56.729 [2024-07-15 19:22:36.934473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.729 null0 00:27:56.729 [2024-07-15 19:22:36.966519] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:56.729 [2024-07-15 19:22:36.967017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.729 19:22:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:56.729 305098165 00:27:56.729 19:22:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:56.729 851738874 00:27:56.729 19:22:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3441075 00:27:56.729 19:22:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3441075 /var/tmp/bperf.sock 00:27:56.729 19:22:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3441075 ']' 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.729 19:22:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:56.729 [2024-07-15 19:22:37.037730] Starting SPDK v24.09-pre git sha1 a22f117fe / DPDK 24.03.0 initialization... 
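Where keyring_file registers keys from files, keyring_linux places the PSKs into the kernel session keyring with keyctl and lets bdevperf resolve them by name. A condensed sketch of the flow this run goes through below (serial numbers differ between runs; SPDK_DIR and the PSK file are the ones prepared above):

keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s      # returns a serial number (305098165 here)
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
rpc keyring_linux_set_options --enable                                # enable the Linux keyring backend
rpc framework_start_init                                              # bdevperf was started with --wait-for-rpc
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
sn=$(keyctl search @s user :spdk-test:key0)                           # look the key up again by name
keyctl print "$sn"                                                    # payload should match the interchange string
keyctl unlink "$sn"                                                   # cleanup, as done at the end of the test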
00:27:56.729 [2024-07-15 19:22:37.037806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441075 ] 00:27:56.729 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.729 [2024-07-15 19:22:37.095374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.987 [2024-07-15 19:22:37.207237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.987 19:22:37 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.987 19:22:37 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:56.987 19:22:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:56.987 19:22:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:57.245 19:22:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:57.245 19:22:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:57.503 19:22:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:57.503 19:22:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:57.759 [2024-07-15 19:22:38.055502] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:57.759 nvme0n1 00:27:57.759 19:22:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:57.759 19:22:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:57.759 19:22:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:57.759 19:22:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:57.759 19:22:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:57.759 19:22:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.017 19:22:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:58.017 19:22:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:58.017 19:22:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:58.017 19:22:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:58.017 19:22:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:58.017 19:22:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.017 19:22:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:58.275 19:22:38 keyring_linux -- keyring/linux.sh@25 -- # sn=305098165 00:27:58.275 19:22:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:58.275 19:22:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:58.275 19:22:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 305098165 == \3\0\5\0\9\8\1\6\5 ]] 00:27:58.275 19:22:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 305098165 00:27:58.275 19:22:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:58.275 19:22:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:58.534 Running I/O for 1 seconds... 00:27:59.470 00:27:59.470 Latency(us) 00:27:59.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.470 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:59.470 nvme0n1 : 1.03 4052.99 15.83 0.00 0.00 31216.35 8398.32 40195.41 00:27:59.470 =================================================================================================================== 00:27:59.470 Total : 4052.99 15.83 0.00 0.00 31216.35 8398.32 40195.41 00:27:59.470 0 00:27:59.470 19:22:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:59.470 19:22:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:59.728 19:22:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:59.728 19:22:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:59.728 19:22:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:59.728 19:22:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:59.728 19:22:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:59.728 19:22:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:59.986 19:22:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:59.986 19:22:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:59.986 19:22:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:59.986 19:22:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:59.986 19:22:40 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:59.986 19:22:40 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:59.986 19:22:40 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:59.986 19:22:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.986 19:22:40 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:59.986 19:22:40 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:59.986 19:22:40 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:59.986 19:22:40 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:00.244 [2024-07-15 19:22:40.528138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:00.244 [2024-07-15 19:22:40.528731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b23f0 (107): Transport endpoint is not connected 00:28:00.244 [2024-07-15 19:22:40.529721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b23f0 (9): Bad file descriptor 00:28:00.244 [2024-07-15 19:22:40.530719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:00.244 [2024-07-15 19:22:40.530743] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:00.244 [2024-07-15 19:22:40.530767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:00.244 request: 00:28:00.244 { 00:28:00.244 "name": "nvme0", 00:28:00.244 "trtype": "tcp", 00:28:00.244 "traddr": "127.0.0.1", 00:28:00.244 "adrfam": "ipv4", 00:28:00.244 "trsvcid": "4420", 00:28:00.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:00.244 "prchk_reftag": false, 00:28:00.244 "prchk_guard": false, 00:28:00.244 "hdgst": false, 00:28:00.244 "ddgst": false, 00:28:00.244 "psk": ":spdk-test:key1", 00:28:00.244 "method": "bdev_nvme_attach_controller", 00:28:00.244 "req_id": 1 00:28:00.244 } 00:28:00.244 Got JSON-RPC error response 00:28:00.244 response: 00:28:00.244 { 00:28:00.244 "code": -5, 00:28:00.244 "message": "Input/output error" 00:28:00.244 } 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@33 -- # sn=305098165 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 305098165 00:28:00.244 1 links removed 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@33 -- # sn=851738874 00:28:00.244 
19:22:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 851738874 00:28:00.244 1 links removed 00:28:00.244 19:22:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3441075 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3441075 ']' 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3441075 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3441075 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3441075' 00:28:00.244 killing process with pid 3441075 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 3441075 00:28:00.244 Received shutdown signal, test time was about 1.000000 seconds 00:28:00.244 00:28:00.244 Latency(us) 00:28:00.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.244 =================================================================================================================== 00:28:00.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.244 19:22:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 3441075 00:28:00.502 19:22:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3441063 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3441063 ']' 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3441063 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3441063 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3441063' 00:28:00.502 killing process with pid 3441063 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 3441063 00:28:00.502 19:22:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 3441063 00:28:01.069 00:28:01.069 real 0m4.998s 00:28:01.069 user 0m9.264s 00:28:01.069 sys 0m1.486s 00:28:01.069 19:22:41 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:01.069 19:22:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:01.069 ************************************ 00:28:01.069 END TEST keyring_linux 00:28:01.069 ************************************ 00:28:01.069 19:22:41 -- common/autotest_common.sh@1142 -- # return 0 00:28:01.069 19:22:41 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:01.069 19:22:41 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:01.069 19:22:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:01.069 19:22:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:01.069 19:22:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:01.069 19:22:41 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:01.069 19:22:41 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:01.069 19:22:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:01.069 19:22:41 -- common/autotest_common.sh@10 -- # set +x 00:28:01.069 19:22:41 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:01.069 19:22:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:01.069 19:22:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:01.069 19:22:41 -- common/autotest_common.sh@10 -- # set +x 00:28:02.974 INFO: APP EXITING 00:28:02.974 INFO: killing all VMs 00:28:02.974 INFO: killing vhost app 00:28:02.974 INFO: EXIT DONE 00:28:03.942 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:28:03.942 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:03.942 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:03.942 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:03.942 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:03.942 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:03.942 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:03.942 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:03.942 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:03.942 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:03.942 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:04.200 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:04.200 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:04.200 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:04.200 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:04.200 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:04.200 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:05.576 Cleaning 00:28:05.576 Removing: /var/run/dpdk/spdk0/config 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:05.576 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:05.576 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:05.576 Removing: /var/run/dpdk/spdk1/config 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:05.576 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:05.576 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:05.576 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:05.576 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:05.576 Removing: /var/run/dpdk/spdk2/config 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:05.576 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:05.576 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:05.576 Removing: /var/run/dpdk/spdk3/config 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:05.576 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:05.576 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:05.576 Removing: /var/run/dpdk/spdk4/config 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:05.576 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:05.576 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:05.576 Removing: /dev/shm/bdev_svc_trace.1 00:28:05.576 Removing: /dev/shm/nvmf_trace.0 00:28:05.576 Removing: /dev/shm/spdk_tgt_trace.pid3179089 00:28:05.576 Removing: /var/run/dpdk/spdk0 00:28:05.576 Removing: /var/run/dpdk/spdk1 00:28:05.576 Removing: /var/run/dpdk/spdk2 00:28:05.576 Removing: /var/run/dpdk/spdk3 00:28:05.576 Removing: /var/run/dpdk/spdk4 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3177426 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3178158 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3179089 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3179413 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3180106 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3180246 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3180958 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3181085 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3181329 00:28:05.576 Removing: /var/run/dpdk/spdk_pid3182523 00:28:05.577 Removing: 
/var/run/dpdk/spdk_pid3183569 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3183809 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3184069 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3184282 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3184516 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3184752 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3184910 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3185090 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3185404 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3187755 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3188051 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3188343 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3188351 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3188781 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3188799 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3189226 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3189304 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3189522 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3189660 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3189824 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3189962 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3190334 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3190491 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3190809 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3190980 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3191052 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3191193 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3191355 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3191627 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3191785 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3191943 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3192221 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3192378 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3192603 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3192845 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3193081 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3193359 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3193529 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3193689 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3194203 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3194620 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3194781 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3195057 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3195216 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3195383 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3195651 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3195818 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3195997 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3196209 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3198271 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3224556 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3227183 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3234781 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3238066 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3240428 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3240840 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3244806 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3248662 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3248701 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3249320 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3249979 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3250527 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3250956 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3251045 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3251182 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3251315 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3251321 00:28:05.577 Removing: 
/var/run/dpdk/spdk_pid3251978 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3252522 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3253190 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3253596 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3253717 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3253864 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3254877 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3255727 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3261240 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3261611 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3264744 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3268449 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3270503 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3276886 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3282019 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3283282 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3283944 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3294286 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3296492 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3322059 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3325460 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3326641 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3327960 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3328010 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3328116 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3328257 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3328695 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3330011 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3330868 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3331186 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3332910 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3333220 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3333786 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3336304 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3342322 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3345097 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3348873 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3349936 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3351040 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3353661 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3356548 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3360719 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3360770 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3363552 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3363783 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3363942 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3364205 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3364221 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3366984 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3367428 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3370094 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3371955 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3375498 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3378809 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3385170 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3389508 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3389516 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3402333 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3402874 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3403278 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3403807 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3404398 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3404812 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3405335 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3405749 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3408241 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3408383 00:28:05.577 Removing: 
/var/run/dpdk/spdk_pid3412164 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3412229 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3413951 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3418982 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3419018 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3421928 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3423326 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3424842 00:28:05.577 Removing: /var/run/dpdk/spdk_pid3426202 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3427495 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3428368 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3433725 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3434035 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3434428 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3435984 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3436377 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3436660 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3439099 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3439240 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3440578 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3441063 00:28:05.835 Removing: /var/run/dpdk/spdk_pid3441075 00:28:05.835 Clean 00:28:05.835 19:22:46 -- common/autotest_common.sh@1451 -- # return 0 00:28:05.835 19:22:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:05.835 19:22:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.835 19:22:46 -- common/autotest_common.sh@10 -- # set +x 00:28:05.835 19:22:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:05.835 19:22:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.835 19:22:46 -- common/autotest_common.sh@10 -- # set +x 00:28:05.835 19:22:46 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:05.835 19:22:46 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:05.835 19:22:46 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:05.835 19:22:46 -- spdk/autotest.sh@391 -- # hash lcov 00:28:05.835 19:22:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:05.835 19:22:46 -- spdk/autotest.sh@393 -- # hostname 00:28:05.835 19:22:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:06.093 geninfo: WARNING: invalid characters removed from testname! 
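The lcov capture above and the steps that follow merge the baseline and post-test coverage data and strip external code out of the combined report. The same sequence reduced to its essentials, with the output directory abbreviated to $OUT, the shared flags gathered in $LCOV_OPTS, and the full run additionally passing the genhtml rc options shown in the log:

LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
OUT=/path/to/output                           # spdk/../output in this run
lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"    # capture after the tests
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"  # drop bundled DPDK
lcov $LCOV_OPTS -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"  # drop system code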
00:28:38.146 19:23:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:38.146 19:23:17 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:40.051 19:23:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:43.339 19:23:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:45.877 19:23:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:49.204 19:23:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:51.739 19:23:32 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:51.999 19:23:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.999 19:23:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:51.999 19:23:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.999 19:23:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.999 19:23:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 19:23:32 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 19:23:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 19:23:32 -- paths/export.sh@5 -- $ export PATH 00:28:51.999 19:23:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.999 19:23:32 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:51.999 19:23:32 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:51.999 19:23:32 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721064212.XXXXXX 00:28:51.999 19:23:32 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721064212.Id3z4D 00:28:51.999 19:23:32 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:51.999 19:23:32 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:51.999 19:23:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:51.999 19:23:32 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:51.999 19:23:32 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:51.999 19:23:32 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:51.999 19:23:32 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:51.999 19:23:32 -- common/autotest_common.sh@10 -- $ set +x 00:28:51.999 19:23:32 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:52.000 19:23:32 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:52.000 19:23:32 -- pm/common@17 -- $ local monitor 00:28:52.000 19:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:52.000 19:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:52.000 19:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:52.000 19:23:32 -- pm/common@21 -- $ date +%s 00:28:52.000 19:23:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:52.000 19:23:32 -- pm/common@21 -- $ date +%s 00:28:52.000 
19:23:32 -- pm/common@25 -- $ sleep 1 00:28:52.000 19:23:32 -- pm/common@21 -- $ date +%s 00:28:52.000 19:23:32 -- pm/common@21 -- $ date +%s 00:28:52.000 19:23:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721064212 00:28:52.000 19:23:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721064212 00:28:52.000 19:23:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721064212 00:28:52.000 19:23:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721064212 00:28:52.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721064212_collect-vmstat.pm.log 00:28:52.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721064212_collect-cpu-load.pm.log 00:28:52.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721064212_collect-cpu-temp.pm.log 00:28:52.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721064212_collect-bmc-pm.bmc.pm.log 00:28:52.939 19:23:33 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:52.939 19:23:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:52.939 19:23:33 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:52.939 19:23:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:52.939 19:23:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:52.939 19:23:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:52.939 19:23:33 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:52.939 19:23:33 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:52.939 19:23:33 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:52.939 19:23:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:52.939 19:23:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:52.939 19:23:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:52.939 19:23:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:52.939 19:23:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:52.939 19:23:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:52.939 19:23:33 -- pm/common@44 -- $ pid=3450779 00:28:52.939 19:23:33 -- pm/common@50 -- $ kill -TERM 3450779 00:28:52.939 19:23:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:52.940 19:23:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:52.940 19:23:33 -- pm/common@44 -- $ pid=3450781 00:28:52.940 19:23:33 -- pm/common@50 -- $ kill 
-TERM 3450781
00:28:52.940 19:23:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:52.940 19:23:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:52.940 19:23:33 -- pm/common@44 -- $ pid=3450783
00:28:52.940 19:23:33 -- pm/common@50 -- $ kill -TERM 3450783
00:28:52.940 19:23:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:52.940 19:23:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:52.940 19:23:33 -- pm/common@44 -- $ pid=3450812
00:28:52.940 19:23:33 -- pm/common@50 -- $ sudo -E kill -TERM 3450812
00:28:52.940 + [[ -n 3094024 ]]
00:28:52.940 + sudo kill 3094024
00:28:52.950 [Pipeline] }
00:28:52.968 [Pipeline] // stage
00:28:52.974 [Pipeline] }
00:28:52.992 [Pipeline] // timeout
00:28:52.997 [Pipeline] }
00:28:53.053 [Pipeline] // catchError
00:28:53.056 [Pipeline] }
00:28:53.067 [Pipeline] // wrap
00:28:53.070 [Pipeline] }
00:28:53.080 [Pipeline] // catchError
00:28:53.086 [Pipeline] stage
00:28:53.088 [Pipeline] { (Epilogue)
00:28:53.098 [Pipeline] catchError
00:28:53.099 [Pipeline] {
00:28:53.109 [Pipeline] echo
00:28:53.110 Cleanup processes
00:28:53.114 [Pipeline] sh
00:28:53.395 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:53.395 3450940 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:53.395 3451043 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:53.410 [Pipeline] sh
00:28:53.694 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:53.694 ++ grep -v 'sudo pgrep'
00:28:53.694 ++ awk '{print $1}'
00:28:53.694 + sudo kill -9 3450940
00:28:53.706 [Pipeline] sh
00:28:53.990 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:02.107 [Pipeline] sh
00:29:02.392 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:02.392 Artifacts sizes are good
00:29:02.410 [Pipeline] archiveArtifacts
00:29:02.431 Archiving artifacts
00:29:02.638 [Pipeline] sh
00:29:02.923 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:29:02.938 [Pipeline] cleanWs
00:29:02.946 [WS-CLEANUP] Deleting project workspace...
00:29:02.946 [WS-CLEANUP] Deferred wipeout is used...
00:29:02.952 [WS-CLEANUP] done
00:29:02.954 [Pipeline] }
00:29:03.012 [Pipeline] // catchError
00:29:03.026 [Pipeline] sh
00:29:03.299 + logger -p user.info -t JENKINS-CI
00:29:03.308 [Pipeline] }
00:29:03.324 [Pipeline] // stage
00:29:03.329 [Pipeline] }
00:29:03.347 [Pipeline] // node
00:29:03.353 [Pipeline] End of Pipeline
00:29:03.385 Finished: SUCCESS
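The epilogue above stops each resource monitor through its pid file (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) before the leftover ipmitool process is killed and the workspace is archived and wiped. A minimal sketch of that pid-file teardown pattern follows; the collector names and the power output directory are taken from the log, while the loop itself is an illustration under those assumptions, not the actual scripts/perf/pm/common implementation.

    #!/usr/bin/env bash
    # Sketch of a pid-file based monitor teardown (illustrative only).
    set -u
    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power   # from the log

    for name in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$POWER_DIR/$name.pid"
        [[ -e "$pidfile" ]] || continue          # collector never started or already cleaned up
        pid=$(<"$pidfile")
        if [[ "$name" == collect-bmc-pm ]]; then
            sudo -E kill -TERM "$pid" || true    # the BMC collector was launched via sudo in the log
        else
            kill -TERM "$pid" || true
        fi
    done

Sending SIGTERM rather than SIGKILL gives each collector a chance to flush its .pm.log before the workspace cleanup that follows.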